
    In the previous part of this series, we touched on user authentication and JWT. In this episode, we'll take a quick look at go-micro's broker functionality and event brokering.

    As mentioned in previous posts, go-micro is a pluggable framework that interfaces with lots of different commonly used technologies. If you take a look at the plugins repo, you'll see just how many it supports out of the box.

    In our case, we'll be using the NATS broker plugin.

    Event Driven Architecture

    Event-driven architecture is a very simple concept at its core. We generally consider good architecture to be decoupled; services shouldn't be coupled to, or aware of, other services. When we use protocols such as gRPC, that's in some cases true, because we're saying 'I would like to publish this request to go.srv.user-service', for example, which uses service discovery to find the actual location of that service. Although that doesn't directly couple us to the implementation, it does couple that service to something else called go.srv.user-service, so it isn't completely decoupled, as it's talking directly to something else.

    So what makes event-driven architecture truly decoupled? To understand this, let's first look at the process of publishing and subscribing to an event. Service A finishes task X, and then says to the ecosystem 'X has just happened'. It doesn't need to know about, or care about, who is listening to that event, or what is affected by that event taking place. That responsibility is left to the listening clients.

    It's also easier if you're expecting any number of services to act upon a certain event. For example, if you wanted 12 different services to act upon a new user being created, using gRPC you would have to instantiate 12 clients within your user service. With pub/sub, or event-driven architecture, your service doesn't need to care about that.

    Now, a client service simply listens for an event. This means that you need some kind of mediator in the middle to accept these events and notify the listening clients when they are published.
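
    To make that shape concrete, here's a toy, in-memory sketch of the pattern. This isn't go-micro or NATS; the Broker type and topic name are invented purely for illustration:

    package main

    import "fmt"

    // Broker is the mediator: it accepts published events and fans them
    // out to whoever subscribed. Real systems use NATS, RabbitMQ, etc.
    type Broker struct {
        subscribers map[string][]func(body []byte)
    }

    func NewBroker() *Broker {
        return &Broker{subscribers: make(map[string][]func(body []byte))}
    }

    func (b *Broker) Subscribe(topic string, handler func(body []byte)) {
        b.subscribers[topic] = append(b.subscribers[topic], handler)
    }

    func (b *Broker) Publish(topic string, body []byte) {
        // The publisher doesn't know or care who is listening
        for _, handler := range b.subscribers[topic] {
            handler(body)
        }
    }

    func main() {
        b := NewBroker()

        // The "email service" listens for the event
        b.Subscribe("user.created", func(body []byte) {
            fmt.Println("email service: sending welcome email to", string(body))
        })

        // The "user service" simply announces that something happened
        b.Publish("user.created", []byte("ewan@example.com"))
    }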

    In this post, we'll create an event every time we create a user, and we will create a new service for sending out emails. We won't actually write the email implementation, just mock it out for now.

    First we need to integrate the broker plug-in into our user-service:

    // shippy-user-service/handler.go

    const topic = "user.created"

    type service struct {
        repo         Repository
        tokenService Authable
        PubSub       broker.Broker
    }

    ...

    func (srv *service) Create(ctx context.Context, req *pb.User, res *pb.Response) error {

        // Generates a hashed version of our password
        hashedPass, err := bcrypt.GenerateFromPassword([]byte(req.Password), bcrypt.DefaultCost)
        if err != nil {
            return err
        }

        req.Password = string(hashedPass)
        if err := srv.repo.Create(req); err != nil {
            return err
        }
        res.User = req

        // Publish the user.created event
        if err := srv.publishEvent(req); err != nil {
            return err
        }

        return nil
    }

    func (srv *service) publishEvent(user *pb.User) error {

        // Marshal to JSON string
        body, err := json.Marshal(user)
        if err != nil {
            return err
        }

        // Create a broker message
        msg := &broker.Message{
            Header: map[string]string{
                "id": user.Id,
            },
            Body: body,
        }

        // Publish message to broker
        if err := srv.PubSub.Publish(topic, msg); err != nil {
            log.Printf("[pub] failed: %v", err)
        }

        return nil
    }

    ...
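
    That PubSub field has to come from somewhere. In main.go, once srv.Init() has loaded the broker plugin, you can fetch the configured broker out of the service's options and pass it into the handler. Roughly, and as a sketch only (the field order and service name are taken from earlier parts of this series, so check them against your own repo):

    // shippy-user-service/main.go
    func main() {
        ...
        srv := micro.NewService(
            micro.Name("go.srv.user-service"),
            micro.Version("latest"),
        )

        // Looks for configuration (plugins, env vars, flags) and wires it in
        srv.Init()

        // Fetch the broker instance that Init() configured for us
        pubsub := srv.Server().Options().Broker

        // Hand the broker to our handler so Create() can publish events
        pb.RegisterUserServiceHandler(srv.Server(), &service{repo, tokenService, pubsub})

        if err := srv.Run(); err != nil {
            log.Fatal(err)
        }
    }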

    Make sure you're running Postgres and then let's run this service:

    $ docker run -d -p 5432:5432 postgres
    $ make build
    $ make run

    Now let's create our email service. I've created a new repo for this: shippy-email-service.

    Before running this, we'll need to run NATS:

    $ docker run -d -p 4222:4222 nats
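
    For reference, the email service's main.go looks roughly like this. It's a sketch: the proto import path assumes the shippy-user-service repo layout from earlier in the series, and the broker.Publication handler type is the one go-micro used at the time of writing (later versions renamed it):

    // shippy-email-service/main.go
    package main

    import (
        "encoding/json"
        "log"

        pb "github.com/EwanValentine/shippy-user-service/proto/user"
        micro "github.com/micro/go-micro"
        "github.com/micro/go-micro/broker"
        _ "github.com/micro/go-plugins/broker/nats"
    )

    const topic = "user.created"

    func main() {
        srv := micro.NewService(
            micro.Name("go.srv.email-service"),
            micro.Version("latest"),
        )

        srv.Init()

        // Fetch the broker instance that srv.Init() configured for us
        pubsub := srv.Server().Options().Broker
        if err := pubsub.Connect(); err != nil {
            log.Fatal(err)
        }

        // Subscribe to user.created and mock out sending the email
        _, err := pubsub.Subscribe(topic, func(p broker.Publication) error {
            var user *pb.User
            if err := json.Unmarshal(p.Message().Body, &user); err != nil {
                return err
            }
            log.Println("Sending email to:", user.Name)
            return nil
        })
        if err != nil {
            log.Println(err)
        }

        if err := srv.Run(); err != nil {
            log.Fatal(err)
        }
    }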

    Also, I'd like to quickly explain a part of go-micro that I feel is important in understanding how it works as a framework. You will notice:

    srv.Init()
    pubsub := srv.Server().Options().Broker

    Let's take a quick look at that. When we create a service in go-micro, srv.Init() will, behind the scenes, look for any configuration, such as plugins, environment variables, or command-line options. It will then instantiate these integrations as part of the service. In order to use those instances, we need to fetch them out of the service instance. Within srv.Server().Options(), you will also find Transport and Registry.

    In our case here, it will find our GO_MICRO_BROKER environment variables, find the NATS broker plugin, and create an instance of it, ready for us to connect to and use.

    If you're creating a command-line tool, you'd use cmd.Init(), ensuring you're importing "github.com/micro/go-micro/cmd". That will have the same effect.
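
    As a very rough sketch (assuming the v1-era go-micro packages, where broker.DefaultBroker is the instance that cmd.Init() configures from your flags and environment):

    package main

    import (
        "log"

        "github.com/micro/go-micro/broker"
        "github.com/micro/go-micro/cmd"
    )

    func main() {
        // Parses the same flags/env vars a full go-micro service would
        cmd.Init()

        // broker.DefaultBroker now points at whichever broker plugin was configured
        log.Println("using broker:", broker.DefaultBroker.String())
    }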

    Now build and run this service: $ make build && make run, ensuring you're running the user service as well. Then head over to our shippy-user-cli repo, and run $ make run, watching our email service output. You should see something like… 2017/12/26 23:57:23 Sending email to: Ewan Valentine.

    And that's it! This is a bit of a tenuous example, as our email service is implicitly listening to a single 'user.created' event. But hopefully you can see how this approach allows you to write decoupled services.

    It's worth mentioning that using JSON over NATS has a performance overhead compared with gRPC, as we're back in the realm of serialising JSON strings. But for some use cases that's perfectly acceptable. NATS is incredibly efficient, and great for fire-and-forget events.

    If you're not using go-micro, there's a NATS Go library (NATS itself is written in Go, so naturally support for Go is pretty solid).

    Publishing an event:
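
    A sketch using the plain NATS client (github.com/nats-io/go-nats at the time of writing); the JSON payload here is purely illustrative:

    package main

    import (
        "log"

        nats "github.com/nats-io/go-nats"
    )

    func main() {
        // Connect to a local NATS server (listening on 4222 by default)
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // Publish the new user as a JSON payload on the user.created subject
        user := []byte(`{"name": "Ewan Valentine", "email": "ewan@example.com"}`)
        if err := nc.Publish("user.created", user); err != nil {
            log.Fatal(err)
        }
    }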

    Subscribing to an event:

    // nc is a connected *nats.Conn; convertUserString and sendEmail are illustrative helpers
    nc.Subscribe("user.created", func(m *nats.Msg) {
        user := convertUserString(m.Data)
        go sendEmail(user)
    })

    I mentioned earlier that when using a third-party message broker such as NATS, you lose the use of protobuf. That's a shame, because we lose the ability to communicate using binary streams, which of course have a much lower overhead than serialised JSON strings. But, like most concerns, go-micro has an answer to that problem as well.

    Built in to go-micro is a pub/sub layer, which sits on top of the broker layer but without the need for a third-party broker such as NATS. The awesome part of this feature is that it utilises your protobuf definitions, so we're back in the realm of low-latency binary streams. Let's update our user-service to replace the existing NATS broker with go-micro's pub/sub:

    // shippy-user-service/main.go
    func main() {
        ...
        publisher := micro.NewPublisher("user.created", srv.Client())

        // Register handler
        pb.RegisterUserServiceHandler(srv.Server(), &service{repo, tokenService, publisher})
        ...
    }
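
    The handler then swaps the broker.Broker field for a micro.Publisher, whose Publish method takes the protobuf message directly and handles the marshalling for us. As a sketch (the field name and publishEvent helper mirror the earlier handler code, so adjust to match your own repo):

    // shippy-user-service/handler.go
    type service struct {
        repo         Repository
        tokenService Authable
        Publisher    micro.Publisher
    }

    func (srv *service) publishEvent(ctx context.Context, user *pb.User) error {
        // No json.Marshal or broker.Message needed; the publisher
        // encodes the protobuf message and sends it for us
        return srv.Publisher.Publish(ctx, user)
    }

    Create() would then call srv.publishEvent(ctx, req) rather than srv.publishEvent(req).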

    Now our email service:

    // shippy-email-service
    const topic = "user.created"

    type Subscriber struct{}

    func (sub *Subscriber) Process(ctx context.Context, user *pb.User) error {
        log.Println("Picked up a new message")
        log.Println("Sending email to:", user.Name)
        return nil
    }

    func main() {
        ...
        micro.RegisterSubscriber(topic, srv.Server(), new(Subscriber))
        ...
    }

    Now we're using our underlying User protobuf definition across our services, over gRPC, and using no third-party broker. Fantastic!

    That's a wrap! In the next tutorial, we'll look at creating a user interface for our services and see how a web client can begin to interact with them.

    Any bugs, mistakes, or feedback on this article, or anything else you would find helpful, please get in touch.

    If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine