
Introduction - Part 2: Docker and go-micro

In the previous post, we covered the basics of writing a gRPC based microservice. In this part, we will cover the basics of Dockerising a service, update our service to use go-micro, and finally introduce a second service.

Introducing Docker.

With the advent of cloud computing and the birth of microservices, the pressure to deploy more, but smaller, chunks of code at a time has led to some interesting new ideas and technologies, one of which being the concept of containers.

Traditionally, teams would deploy a monolith to static servers, running a set operating system, with a predefined set of dependencies to keep track of, or perhaps to a VM provisioned by Chef or Puppet, for example. Scaling was expensive and not all that effective. The most common option was vertical scaling, i.e. throwing more and more resources at static servers.

Tools like Vagrant came along and made provisioning VMs fairly trivial. But running a VM was still a fairly hefty operation. You were running a full operating system in all its glory, kernel and all, within your host machine. In terms of resources, this is pretty expensive. So when microservices came along, it became infeasible to run so many separate codebases in their own environments.

Containers are slimmed down versions of an operating system. Containers don't contain a kernel, a guest OS or any of the lower level components which would typically make up an OS.

Containers only contain the top level libraries and run-time components; the kernel is shared with the host machine. So the host machine runs a single Unix kernel, which is then shared by any number of containers, each running very different sets of run-times.

Under the hood, containers utilise various kernel features, such as namespaces and cgroups, in order to share resources and network functionality across the container space.

This means you can run the run-time and the dependencies your code needs, without booting up several complete operating systems. This is a game changer, because the overall size of a container vs a VM is orders of magnitude smaller. Ubuntu, for example, is typically a little under 1GB in size, whereas its Docker image counterpart is a mere 188MB.

You will notice I spoke more broadly of containers in that introduction, rather than 'Docker containers'. It's common to think that Docker and containers are the same thing. However, containers are more of a concept or set of capabilities within Linux. Docker is just one flavour of containers, which became popular largely due to its ease of use. There are other container implementations, too. But we'll be sticking with Docker, as it's in my opinion the best supported, and the simplest for newcomers.

So, now that you hopefully see the value in containerisation, we can start Dockerising our first service. Let's create a Dockerfile for our consignment service.

In that file, add the following:
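A minimal sketch of that Dockerfile, based on the description that follows. The binary name shippy-service-consignment is an assumption, matching the image we build in the Makefile later; use whatever name your go build step produces.

```dockerfile
FROM alpine:latest

# Create a directory to house our application, and make it the working directory.
RUN mkdir /app
WORKDIR /app

# Add our compiled Go binary into the container...
ADD shippy-service-consignment /app/shippy-service-consignment

# ...and run it when the container starts.
CMD ["./shippy-service-consignment"]
```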

If you're running on Linux, you might run into issues using Alpine. So if you're following this article on a Linux machine, simply replace alpine with debian, and you should be good to go. We'll touch on an even better way to build our binaries later on.

First of all, we are pulling in the latest Linux Alpine image. Alpine is a light-weight Linux distribution, developed and optimised for running Dockerised web applications. In other words, Alpine has just enough dependencies and run-time functionality to run most applications. This means its image size is around 8MB(!!). Compare that with, say, an Ubuntu VM at around 1GB, and you can start to see why Docker images became a more natural fit for microservices and cloud computing.

Next we create a new directory to house our application, and set it as the working directory, so that it's the default directory for the instructions that follow. We then add our compiled binary into our Docker container, and run it.

Now let's update our Makefile's build entry to build our docker image.

```makefile
build:
	...
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .
```

We've added two more steps here, and I'd like to explain them in a little more detail. First of all, we're building our Go binary. You will notice two environment variables are being set before we run $ go build, however. GOOS and GOARCH allow you to cross-compile your Go binary for another operating system. Since I'm developing on a Macbook, a binary compiled natively for macOS won't run within a Docker container, which uses Linux; the binary would be completely meaningless within your Docker container, and it would throw an error.

The second step I added is the docker build process. This reads your Dockerfile and builds an image by the name shippy-service-consignment; the period denotes a directory path, so here we just want the build process to look in the current directory.

I'm going to add a new entry in our Makefile:

```makefile
run:
	docker run -p 50051:50051 shippy-service-consignment
```

Here, we run our shippy-service-consignment docker image, exposing port 50051. Because Docker runs on a separate networking layer, you need to forward the port used within your Docker container to your host. You can map the internal port to a different port on the host by changing the first segment. For example, if you wanted to expose this service on port 8080, you would change the -p argument to 8080:50051. You can also run a container in the background by including a -d flag, for example docker run -d -p 50051:50051 shippy-service-consignment.

When you run $ docker build, you are building your code and run-time environment into an image. Docker images are portable snapshots of your environment and its dependencies. You can share Docker images by publishing them to Docker Hub, which is a sort of npm, or yum repo, for Docker images. When you define a FROM in your Dockerfile, you are telling Docker to pull that image from Docker Hub to use as your base. You can then extend and override parts of that base file by re-defining them in your own. We won't be publishing our Docker images, but feel free to peruse Docker Hub, and note how just about any piece of software has been containerised already; some really obscure things have been Dockerised.

Each declaration within a Dockerfile is cached when it's first built. This saves having to re-build the entire run-time each time you make a change. Docker is clever enough to work out which parts have changed, and which parts need re-building. This makes the build process incredibly quick.

Enough about containers! Let's get back to our code.

When creating a gRPC service, there's quite a lot of boilerplate code for creating connections, and you have to hard-code the location of the service address into a client, or another service, in order for it to connect. This is tricky, because when you are running services in the cloud, they may not share the same host, or the address or IP may change after a service is re-deployed.

This is where service discovery comes into play. Service discovery keeps an up-to-date catalogue of all your services and their locations. Each service registers itself at runtime, and de-registers itself on closure. Each service then has a name or id assigned to it. So that even though it may have a new IP address, or host address, as long as the service name remains the same, you don't need to update calls to this service from your other services.

Typically, there are many approaches to this problem, but like most things in programming, if someone has tackled this problem already, there's no point re-inventing the wheel. One person who has tackled these problems with fantastic clarity and ease of use, is @chuhnk (Asim Aslam), creator of Go-micro. He is very much a one man army, producing some fantastic software. Please consider helping him out if you like what you see!

Go-micro

Go-micro is a powerful microservice framework written in Go, for use, for the most part, with Go. However, you can use the micro sidecar in order to interface with other languages as well.

Go-micro has useful features for making microservices in Go trivial. But we'll start with probably the most common issue it solves, and that's service discovery.

We will need to make a few updates to our service in order to work with go-micro. Go-micro integrates as a protoc plugin, in this case replacing the standard gRPC plugin we're currently using. So let's start by replacing that in our Makefile.

Be sure to install the go-micro dependencies:

```
$ go get -u github.com/micro/protobuf/{proto,protoc-gen-go}
```

```makefile
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	...
	...
```

We have updated our Makefile to use the go-micro plug-in, instead of the gRPC plugin. Now we will need to update our shippy-service-consignment/main.go file to use go-micro. This will abstract much of our previous gRPC code. It handles registering and spinning up our service with ease.
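Here's a sketch of roughly what the file looks like after the change. It assumes the Consignment, Response and GetRequest message types, and the repository, from part one, as well as an import path of github.com/EwanValentine/shippy/consignment-service/proto/consignment; adjust these to your own layout. The service name must match whatever name the CLI looks up later.

```go
// shippy-service-consignment/main.go (abridged sketch)
package main

import (
	"context"
	"fmt"

	// Assumed import path; use the path your protobuf code was generated under.
	pb "github.com/EwanValentine/shippy/consignment-service/proto/consignment"
	micro "github.com/micro/go-micro"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - the same simple in-memory datastore we used in part one.
type Repository struct {
	consignments []*pb.Consignment
}

func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	repo.consignments = append(repo.consignments, consignment)
	return consignment, nil
}

func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// service implements the generated ShippingServiceHandler interface.
type service struct {
	repo repository
}

// CreateConsignment - go-micro handlers take the request and a pointer to
// the response as arguments, and return only an error.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}
	res.Created = true
	res.Consignment = consignment
	return nil
}

func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	res.Consignments = s.repo.GetAll()
	return nil
}

func main() {
	repo := &Repository{}

	// micro.NewService replaces our hand-rolled gRPC server setup and
	// handles registering the service with service discovery.
	srv := micro.NewService(
		// This name is what clients use to resolve the service.
		micro.Name("shippy.consignment.service"),
	)
	srv.Init()

	// Register our implementation against the micro server.
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo})

	// Run handles the connection and blocks until the service stops.
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
```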

The main changes here are the way in which we instantiate our gRPC server, which has been abstracted neatly behind a micro.NewService() method, which handles registering our service; and finally, the service.Run() function, which handles the connection itself. As before, we register our implementation, but this time using a slightly different method.

The second biggest change is to the service methods themselves: the arguments and response types have changed slightly to take both the request and the response structs as arguments, and to return only an error. Within our methods, we set the response, which is then handled by go-micro.

Finally, we are no longer hard-coding the port. Go-micro should be configured using environment variables or command line arguments. To set the address, use MICRO_SERVER_ADDRESS=:50051. By default, Micro utilises mdns (multicast dns) as the service discovery broker for local use. You wouldn't typically use mdns for service discovery in production, but we want to avoid having to run something like Consul or etcd locally for the sake of testing. More on this in a later post.

Let's update our Makefile to reflect this.

```makefile
run:
	docker run -p 50051:50051 \
		-e MICRO_SERVER_ADDRESS=:50051 \
		shippy-service-consignment
```

The -e flag allows you to pass environment variables into your Docker container. You must have one flag per variable, for example -e ENV=staging -e DB_HOST=localhost, etc.

Now if you run $ make run, you will have a Dockerised service, with service discovery. So let's update our cli tool to utilise this.

```go
import (
	...
	micro "github.com/micro/go-micro"
)

func main() {
	service := micro.NewService(micro.Name("shippy.consignment.cli"))
	service.Init()

	client := pb.NewShippingServiceClient("shippy.consignment.service", service.Client())
	...
}
```

See the repository for the full file.

Here we've imported the go-micro libraries for creating clients, and replaced our existing connection code, with the go-micro client code, which uses service resolution instead of connecting directly to an address.

However, if you run this, it won't work. This is because we're running our service in a Docker container now, which has its own mdns, separate from the host mdns we are currently using. The easiest way to fix this is to ensure both service and client are running in "dockerland", so that they are both running on the same host, and using the same network layer. So let's create a Makefile, consignment-cli/Makefile, and add some entries.

```makefile
build:
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-cli-consignment .

run:
	docker run shippy-cli-consignment
```

Similar to before, we want to build our binary for Linux. When we run our docker image, we want to pass in an environment variable to instruct go-micro to use mdns.

Now let's create a Dockerfile for our CLI tool:

```dockerfile
FROM alpine:latest

RUN mkdir -p /app
WORKDIR /app

# Pull in our json data file, as well as our compiled binary.
ADD consignment.json /app/consignment.json
ADD consignment-cli /app/consignment-cli

# The binary name matches the output of `go build` in the consignment-cli directory.
CMD ["./consignment-cli"]
```

This is very similar to our service's Dockerfile, except it also pulls in our json data file.

Earlier, I mentioned that those of you using Linux should switch to use the Debian base image. Now seems like a good time to take a look at a new feature from Docker: Multi-stage builds. This allows us to use multiple Docker images in one Dockerfile.

This is useful in our case especially, as we can use one image to build our binary, with all the correct dependencies, then use a second image to run it. Let's try this out; I will leave detailed comments alongside the code:
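Here's a sketch of a multi-stage Dockerfile for our consignment service, modelled on the vessel-service Dockerfile shown later in this post. The go.mod layout and binary name are assumptions; adjust them to your own project.

```dockerfile
# shippy-service-consignment/Dockerfile

# Stage one: build the binary inside a Go image, so we no longer
# need to cross-compile it on the host machine.
FROM golang:alpine as builder

RUN apk --no-cache add git

WORKDIR /app/shippy-service-consignment

# Copy in the source and fetch its dependencies.
COPY . .
RUN go mod download

# Build a statically linked binary, so it can run on a bare Alpine image.
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment

# Stage two: a minimal run-time image containing just the binary.
FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app

# Copy the compiled binary out of the builder stage.
COPY --from=builder /app/shippy-service-consignment/shippy-service-consignment .

CMD ["./shippy-service-consignment"]
```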

I will now go through our other Dockerfiles and apply this new approach. Oh, and remember to remove $ go build from your Makefiles!

Vessel service

Let's create a second service. We already have a consignment service; this new vessel service will deal with matching a consignment of containers to a vessel which is best suited to that consignment. In order to match our consignment, we need to send the weight and the number of containers to our new vessel service, which will then find a vessel capable of handling that consignment.

Create a new directory in your root directory: $ mkdir shippy-service-vessel. Now create a sub-directory for our new service's protobuf definition: $ mkdir -p shippy-service-vessel/proto/vessel. Then create the protobuf file itself: $ touch shippy-service-vessel/proto/vessel/vessel.proto.

Since the protobuf definition is really the core of our domain design, let's start there.

```protobuf
// shippy-service-vessel/proto/vessel/vessel.proto
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
}
```

As you can see, this is very similar to our first service. We create a service, with a single rpc method called FindAvailable. This takes a Specification type and returns a Response type. The Response type returns either a Vessel type or multiple Vessels, using the repeated field.

Now we need to create a Makefile to handle our build logic and our run script. $ touch shippy-service-vessel/Makefile. Open that file and add the following:

```makefile
# vessel-service/Makefile
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/vessel/vessel.proto
	docker build -t shippy-service-vessel .

run:
	docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 shippy-service-vessel
```

This is almost identical to the first Makefile we created for our consignment service, however notice the service names and the ports have changed a little. We can't run two docker containers on the same port, so we make use of Docker's port forwarding here to map this service's internal port 50051 to port 50052 on the host network.

Now we need a Dockerfile, using our new multi-stage format:

```dockerfile
# vessel-service/Dockerfile
FROM golang:alpine as builder

RUN apk --no-cache add git

WORKDIR /app/shippy-service-vessel

COPY . .

RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-vessel

FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app

COPY --from=builder /app/shippy-service-vessel .

CMD ["./shippy-service-vessel"]
```

Finally, we can start on our implementation:

```go
// vessel-service/main.go
package main

import (
	"context"
	"errors"
	"fmt"

	pb "github.com/EwanValentine/shippy/vessel-service/proto/vessel"
	"github.com/micro/go-micro"
)

type Repository interface {
	FindAvailable(*pb.Specification) (*pb.Vessel, error)
}

// VesselRepository - a simple in-memory collection of vessels.
type VesselRepository struct {
	vessels []*pb.Vessel
}

// FindAvailable - checks a specification against a map of vessels,
// if capacity and max weight are below a vessels capacity and max weight,
// then return that vessel.
func (repo *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
	for _, vessel := range repo.vessels {
		if spec.Capacity <= vessel.Capacity && spec.MaxWeight <= vessel.MaxWeight {
			return vessel, nil
		}
	}
	return nil, errors.New("no vessel found by that spec")
}

// Our grpc service handler
type service struct {
	repo Repository
}

func (s *service) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {

	// Find the next available vessel
	vessel, err := s.repo.FindAvailable(req)
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = vessel
	return nil
}

func main() {
	vessels := []*pb.Vessel{
		&pb.Vessel{Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
	}
	repo := &VesselRepository{vessels}

	srv := micro.NewService(
		micro.Name("shippy.service.vessel"),
	)

	srv.Init()

	// Register our implementation with the micro server
	pb.RegisterVesselServiceHandler(srv.Server(), &service{repo})

	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
```

Now let's get to the interesting part. When we create a consignment, we need to alter our consignment service to call our new vessel service, find a vessel, and update the vessel_id in the created consignment:
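A sketch of the updated handler is below. It's an illustration rather than the definitive change: the vessel proto import path, the NewVesselServiceClient constructor, and the Weight and VesselId fields on the consignment message are assumptions, based on the paths used above and the client pattern the micro plugin generated for our consignment CLI.

```go
// shippy-service-consignment/main.go (abridged sketch of the changes)

import (
	// ...
	vesselProto "github.com/EwanValentine/shippy/vessel-service/proto/vessel"
)

type service struct {
	repo         repository
	vesselClient vesselProto.VesselServiceClient
}

func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

	// Call a client instance of our vessel service with our consignment weight,
	// and the number of containers as the capacity value.
	vesselResponse, err := s.vesselClient.FindAvailable(ctx, &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}

	// Set the vessel_id on the consignment to the vessel we got back.
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment as before.
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	res.Created = true
	res.Consignment = consignment
	return nil
}

func main() {
	// ...

	// Create a client for the vessel service, resolved by name via go-micro.
	vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())

	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo, vesselClient})

	// ...
}
```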

Here we've created a client instance for our vessel service. This allows us to use the service name, i.e. shippy.service.vessel, to call the vessel service as a client and interact with its methods, in this case just the one method (FindAvailable). We send our consignment weight, along with the number of containers we want to ship, as a specification to the vessel service, which then returns an appropriate vessel.

Update the consignment-cli/consignment.json file: remove the hardcoded vessel_id, as we want to confirm our vessel service is finding one for us, and let's add a few more containers and up the weight. For example:

```json
{
  "description": "This is a test consignment",
  "weight": 55000,
  "containers": [
    { "customer_id": "cust001", "user_id": "user001", "origin": "Manchester, United Kingdom" },
    { "customer_id": "cust002", "user_id": "user001", "origin": "Derby, United Kingdom" },
    { "customer_id": "cust005", "user_id": "user001", "origin": "Sheffield, United Kingdom" }
  ]
}
```

Now run $ make build && make run in consignment-cli. You should see a response with a list of created consignments, and in your consignments you should now see a vessel_id has been set.

So there we have it: two inter-connected microservices and a command line interface! In the next part of the series, we will look at persisting some of this data using MongoDB. We will also add a third service, and use docker-compose to manage our growing ecosystem of containers locally.

Check out the repositories for the full example.

Find the repos here: shippy-service-consignment, shippy-cli-consignment.

As ever, if you have any feedback, please send it over. Much appreciated!

If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine

Or, sponsor me to support more content like this.

Accolades: Docker Newsletter (22nd November 2017).