This reference doc is an in-depth guide to the technical details and usage of Micro



Micro is a platform for cloud native development. It consists of a server, command line interface and service framework which enable you to build, run, manage and consume Micro services. This reference walks through the majority of Micro in depth and aims to guide you through its usage. It should be thought of much like a language spec and will evolve over time.


Below are the instructions for installing Micro locally, in Docker or on Kubernetes.


Micro can be installed locally in the following ways. We assume, for the most part, a Linux environment with Go and Git installed.

Go Get

go get github.com/micro/micro/v3


Docker

docker pull ghcr.io/micro/micro:latest

Release Binaries

# MacOS
curl -fsSL https://raw.githubusercontent.com/micro/micro/master/scripts/install.sh | /bin/bash

# Linux
wget -q  https://raw.githubusercontent.com/micro/micro/master/scripts/install.sh -O - | /bin/bash

# Windows
powershell -Command "iwr -useb https://raw.githubusercontent.com/micro/micro/master/scripts/install.ps1 | iex"


The micro server is a distributed systems runtime for the Cloud and beyond. It provides the building blocks for distributed systems development as a set of services, a command line and a service framework. The server is much like a distributed operating system in the sense that each component runs independently of the others, but they work together as one system. This composition allows us to use a microservices architecture pattern even for the platform itself.


The server provides the below functionality as built-in primitives for service development.


To start the server simply run

micro server

In Docker

sudo docker run -p 8080:8080 -p 8081:8081 ghcr.io/micro/micro:latest server

This will boot the entire system and its services, including an HTTP API on :8080 and a gRPC proxy on :8081


Run the following command to check help output

micro --help


Environments define where the server is running. By default this should be local.

Check with the following command

micro env

To set the environment do

micro env set local


Before starting, log in using the default user admin with the password micro

micro login


Here’s a quick list of useful commands

micro --help	# execute help to list commands
micro env	# show the environment config
micro login	# login to the server
micro services	# check what's running
micro status	# check service status

Start helloworld

Run helloworld and check its status

# check env is set to local
micro env
# run the helloworld service
micro run github.com/micro/services/helloworld
# check the service status to see it's running
micro status
# once running should be listed in services
micro services

Call the service and verify output

$ micro helloworld call --name=Alice
{
        "msg": "Hello Alice"
}

Curl it from the API

curl "http://localhost:8080/helloworld/Call?name=Alice"

Stop the service

micro kill helloworld

Command Line

The command line interface is the primary way to interact with a micro server. It's a simple binary that can be used either via direct commands or as an interactive prompt. The CLI proxies all commands as RPC calls to the Micro server. For many of the builtin commands it performs formatting and additional syntactic work.

Builtin Commands

Builtin commands are system or configuration level commands for interacting with the server or changing user config. For the most part this is syntactic sugar for user convenience. Here's a subset of well known commands.


The micro binary and each subcommand have a --help flag to provide a usage guide. The majority should be obvious to the user. We will go through a few in more detail.


Login authenticates the user and stores credentials locally in a .micro/tokens file. This calls the micro auth service to authenticate the user against existing accounts stored in the system. Login asks for a username and password at the prompt.

Dynamic Commands

When issuing a command to the Micro CLI (ie. micro command), if the command is not a builtin, Micro will try to dynamically resolve it and call a running service. Let's take the micro registry command as an example: although the registry is a core service that's running by default on a local Micro setup, the registry command is not a builtin one.

With the --help flag, we can get information about available subcommands and flags

$ micro registry --help
	micro registry


	micro registry [command]


The commands listed are endpoints of the registry service (see micro services).

To see the flags (which are essentially endpoint request parameters) for a subcommand:

$ micro registry getService --help
	micro registry getService

	micro registry getService [flags]

	--service string
	--options_ttl int64
	--options_domain string

At this point it is useful to have a look at the proto of the registry service here.

In particular, let’s see the GetService endpoint definition to understand how request parameters map to flags:

message Options {
	int64 ttl = 1;
	string domain = 2;
}

message GetRequest {
	string service = 1;
	Options options = 2;
}
As the above definition tells us, the request of GetService has the field service at the top level, and the fields ttl and domain in an options structure. The dynamic CLI maps underscored flag names (ie. options_domain) to request fields, so the following request JSON:

    "service": "serviceName",
    "options": {
        "domain": "domainExample"

is equivalent to the following flags:

micro registry getService --service=serviceName --options_domain=domainExample
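The underscore-to-nesting mapping described above can be sketched in plain Go. This is an illustration of the idea, not the actual CLI implementation; the function name flagsToRequest is hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// flagsToRequest maps underscored flag names onto nested request fields,
// mirroring how the dynamic CLI expands e.g. options_domain into
// {"options": {"domain": ...}}.
func flagsToRequest(flags map[string]string) map[string]interface{} {
	req := map[string]interface{}{}
	for name, value := range flags {
		parts := strings.Split(name, "_")
		node := req
		// walk/create intermediate objects for every segment but the last
		for _, p := range parts[:len(parts)-1] {
			child, ok := node[p].(map[string]interface{})
			if !ok {
				child = map[string]interface{}{}
				node[p] = child
			}
			node = child
		}
		node[parts[len(parts)-1]] = value
	}
	return req
}

func main() {
	req := flagsToRequest(map[string]string{
		"service":        "serviceName",
		"options_domain": "domainExample",
	})
	b, _ := json.Marshal(req)
	fmt.Println(string(b)) // {"options":{"domain":"domainExample"},"service":"serviceName"}
}
```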

User Config

The command line uses local user config stored in ~/.micro for any form of state, such as saved environments and tokens. It will always attempt to read from here unless told otherwise. Currently we store all config in a single config.json file and any auth tokens in a tokens file.


Micro is built with a federated, multi-environment model in mind. Development normally progresses through local, staging and production, so Micro takes this forward looking view and builds in the notion of environments: completely isolated Micro deployments you can interact with through the CLI. This reference explains environments.

View Current

Environments can be displayed using the micro env command.

$ micro env
* local
  dev        proxy.m3o.dev
  platform   proxy.m3o.com

There are three builtin environments: local, the default, and two m3o specific offerings, dev and platform. These exist for convenience and speed of development. Additional environments can be created using micro env add [name] [host:port]. Environment addresses point to the micro proxy, which defaults to :8081.

Add Environment

The command micro env --help provides a summary of usage. Here’s an example of how to add an environment.

$ micro env add foobar example.com
$ micro env
* local
  dev        proxy.m3o.dev
  platform   proxy.m3o.com
  foobar     example.com

Set Environment

The * marks which environment is selected. Let’s select the newly added:

$ micro env set foobar
$ micro env
  dev        proxy.m3o.dev
  platform   proxy.m3o.com
* foobar     example.com

Login to an Environment

Each environment is effectively an isolated deployment with its own authentication, storage, etc., so each env requires signup and login. At this point we have to log in to the new foobar environment with micro login. If you don't have credentials for the environment, ask its admin.

Web Dashboard

View and query services in a web browser at localhost:8082. The web dashboard is a simple layer on top of the system to visualise services and their endpoints. Additionally it generates dynamic forms for easy querying.

Run the dashboard with the command

micro web


Micro is built as a distributed operating system leveraging the microservices architecture pattern.


Below we describe the list of services provided by the Micro Server. Each service is considered a building block primitive for a platform and distributed systems development. The proto interfaces for each can be found in micro/proto/auth and the Go library, client and server implementations in micro/service/auth.


The API service is a http API gateway which acts as a public entrypoint and converts http/json to RPC.


The micro API is the public entrypoint for all external access to services, consumed by frontends, mobile apps, etc. The API accepts http/json requests and uses path based routing to resolve backend services. It converts requests to gRPC and forwards them appropriately. The idea here is to focus on microservices on the backend and stitch everything together as a single API for the frontend.


In the default local environment the API address is localhost:8080. Each service running is callable through this API.

$ curl http://localhost:8080
{"version": "v3.0.0-beta"}

An example call would be listing services in the registry:

$ curl http://localhost:8080/registry/listServices

The format is

http://localhost:8080/[service]/[endpoint]

The endpoint name is lower camelcase.
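The path based routing described above can be sketched in plain Go: /helloworld/call resolves to the service helloworld and the endpoint Helloworld.Call. This is an illustration of the mapping only, not the actual API resolver.

```go
package main

import (
	"fmt"
	"strings"
)

// resolve splits an API path into a service name and an RPC endpoint,
// capitalising the first letters as the gRPC endpoint naming requires.
func resolve(path string) (service, endpoint string) {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	service = parts[0]
	method := strings.ToUpper(parts[1][:1]) + parts[1][1:]
	endpoint = strings.ToUpper(service[:1]) + service[1:] + "." + method
	return service, endpoint
}

func main() {
	s, e := resolve("/helloworld/call")
	fmt.Println(s, e) // helloworld Helloworld.Call
}
```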

The parameters can be passed on as query params

$ curl "http://localhost:8080/helloworld/Call?name=Joe"
{"msg":"Hello Joe"}

or JSON body:

curl -XPOST --header "Content-Type: application/json" -d '{"name":"Joe"}' http://localhost:8080/helloworld/Call
{"msg":"Hello Joe"}

To specify a namespace when calling the API, the Micro-Namespace header can be used:

$ curl -H "Micro-Namespace: foobar" "http://localhost:8080/helloworld/Call?name=Joe"

To call a non-public service/endpoint, the Authorization header can be used:

MICRO_API_TOKEN=`micro user token`
curl -H "Authorization: Bearer $MICRO_API_TOKEN" "http://localhost:8080/helloworld/Call?name=Alice"


The auth service provides both authentication and authorization.


The auth service stores accounts and access rules. It provides the single source of truth for all authentication and authorization within the Micro runtime. Every service and user requires an account to operate. When a service is started by the runtime an account is generated for it. Core services and services run by Micro load rules periodically and manage the access to their resources on a per request basis.


For CLI command help use micro auth --help or auth subcommand help such as micro auth create --help.


To log in to a server, simply do the following

$ micro login
Enter username: admin
Enter password: 
Successfully logged in.

This assumes you are pointing at the right environment; it defaults to the local micro server.


Rules determine what resource a user can access. The default rule is the following:

$ micro auth list rules
ID          Scope           Access      Resource        Priority
default     <public>        GRANTED     *:*:*           0

The default rule makes all services callable that appear in the micro status output. Let’s see an example of this.

$ micro run helloworld
# Wait for the service to accept calls
$ curl "http://localhost:8080/helloworld/Call?name=Alice"
{"msg":"Hello Alice"}

If we want to prevent unauthorized users from calling our services, we can create the following rules

# This command creates a rule that enables only logged in users to call the micro server
micro auth create rule  --access=granted --scope='*' --resource="*:*:*" onlyloggedin
# Create the rule which allows us to login
micro auth create rule --access=granted --resource="service:auth:*" auth-public

and delete the default one. Here, the scope * is markedly different from the <public> scope we have seen earlier when doing a micro auth list rules:

$ micro auth list rules
ID            Scope         Access       Resource       Priority
auth-public   <public>      GRANTED      service:auth:* 0
onlyloggedin  *             GRANTED      *:*:*          0
default       <public>      GRANTED      *:*:*          0

Now, let’s remove the default rule.

# This command deletes the 'default' rule - the rule which enables anyone to call the 'micro server'.
$ micro auth delete rule default
Rule deleted

Let’s try curling our service again:

$ curl "http://localhost:8080/helloworld/Call?name=Alice"
{"Id":"helloworld","Code":401,"Detail":"Unauthorized call made to helloworld:Helloworld.Call","Status":"Unauthorized"}

Our onlyloggedin rule took effect. We can still call the service with a token:

$ token=$(micro user token)
# Locally:
$ curl -H "Authorization: Bearer $token" "http://localhost:8080/helloworld/Call?name=Alice"
{"msg":"Hello Alice"}

(Please note tokens have a limited lifetime so the line $ token=$(micro user token) has to be reissued from time to time, or the command must be used inline.)


The auth service supports the concept of accounts. The default account used to access the micro server is the admin account.

$ micro auth list accounts
ID		Name		Scopes		Metadata
admin		admin		admin		n/a

We can create accounts for teammates and coworkers with micro auth create account:

$ micro auth create account --scopes=admin jane
Account created: {"id":"jane","type":"","issuer":"micro","metadata":null,"scopes":["admin"],"secret":"bb7c1a96-c0c6-4ff5-a0e9-13d456f3db0a","name":"jane"}

The freshly created account can be used with micro login by using the jane id and bb7c1a96-c0c6-4ff5-a0e9-13d456f3db0a password.


The broker is a message broker for asynchronous pubsub messaging.


The broker provides a simple abstraction for pubsub messaging. It focuses on simple semantics for fire-and-forget asynchronous communication. The goal here is to provide a pattern for async notifications where some update or event occurred but which does not require persistence. The client and server build in the ability to publish on one side and subscribe on the other. The broker provides no message ordering guarantees.

While a service is normally called by name, messaging focuses on topics that can have multiple publishers and subscribers. The broker is abstracted away within the service's client/server, which includes message encoding/decoding, so you don't have to spend all your time marshalling.


The client contains the Publish method which takes a proto message, encodes it and publishes onto the broker on a given topic. It takes the metadata from the client context and includes these as headers in the message including the content-type so the subscribe side knows how to deal with it.


The server supports a Subscribe method which allows you to register a handler as you would for handling requests. In this way we can mirror the handler behaviour and deserialize the message when consuming from the broker. In this model the server handles connecting to the broker, subscribing, consuming and executing your subscriber function.



bytes, err := json.Marshal(&Healthcheck{
	Healthy: true,
	Service: "foo",
})
if err != nil {
	return err
}

return broker.Publish("health", &broker.Message{Body: bytes})


handler := func(msg *broker.Message) error {
	var hc Healthcheck
	if err := json.Unmarshal(msg.Body, &hc); err != nil {
		return err
	}
	if hc.Healthy {
		logger.Infof("Service %v is healthy", hc.Service)
	} else {
		logger.Infof("Service %v is not healthy", hc.Service)
	}
	return nil
}

sub, err := broker.Subscribe("health", handler)
if err != nil {
	return err
}


The config service provides dynamic configuration for services.


Config can be stored and loaded separately to the application itself for configuring business logic, api keys, etc. We read and write these as key-value pairs which also support nesting of JSON values. The config interface also supports storing secrets by defining the secret key as an option at the time of writing the value.


Let's assume we have a service called helloworld from which we want to read configuration data. First we have to insert said data with the CLI. Config data can be organized under different "paths" with dot notation. It's a good convention to save all config data belonging to a service under a top level path segment matching the service name:

$ micro config set helloworld.somekey hello
$ micro config get helloworld.somekey
hello

We can save another key too and read all values in one go with the dot notation:

$ micro config set helloworld.someotherkey "Hi there!"
$ micro config get helloworld
{"somekey":"hello","someotherkey":"Hi there!"}

As it can be seen, the config (by default) stores configuration data as JSONs. We can save any type:

$ micro config set helloworld.someboolkey true
$ micro config get helloworld.someboolkey
true
$ micro config get helloworld
{"someboolkey":true,"somekey":"hello","someotherkey":"Hi there!"}

So far we have only saved top level keys. Let’s explore the advantages of the dot notation.

$ micro config set helloworld.keywithsubs.subkey1 "So easy!"
$ micro config get helloworld
{"keywithsubs":{"subkey1":"So easy!"},"someboolkey":true,"somekey":"hello","someotherkey":"Hi there!"}

Some of the example keys are getting in our way, let’s learn how to delete:

$ micro config del helloworld.someotherkey
$ micro config get helloworld
{"keywithsubs":{"subkey1":"So easy!"},"someboolkey":true,"somekey":"hello"}

We can of course delete not just leaf level keys, but top level ones too:

$ micro config del helloworld.keywithsubs
$ micro config get helloworld
{"someboolkey":true,"somekey":"hello"}

The config also supports secrets: values encrypted at rest. This helps in case of leaks, be it a security breach or an accidental copy paste.

They are fairly easy to save:

$ micro config set --secret helloworld.hushkey "Very secret stuff"
$ micro config get helloworld.hushkey
[secret]

$ micro config get --secret helloworld.hushkey
Very secret stuff

$ micro config get helloworld
{"hushkey":"[secret]","someboolkey":true,"somekey":"hello"}

$ micro config get --secret helloworld
{"hushkey":"Very secret stuff","someboolkey":true,"somekey":"hello"}

Even bool or number values can be saved as secrets, and they will appear as the string constant [secret] unless decrypted:

$ micro config set --secret helloworld.hush_number_key 42
$ micro config get helloworld
{"hush_number_key":"[secret]","hushkey":"[secret]","someboolkey":true,"somekey":"hello"}

$ micro config get --secret helloworld
{"hush_number_key":42,"hushkey":"Very secret stuff","someboolkey":true,"somekey":"hello"}

Service Framework

It is similarly easy to access and set config values from a service. A good example of reading values is the config example test service:

package main

import (
	"fmt"
	"time"

	"github.com/micro/micro/v3/service"
	"github.com/micro/micro/v3/service/config"
)

type keyConfig struct {
	Subkey  string `json:"subkey"`
	Subkey1 int    `json:"subkey1"`
}

type conf struct {
	Key keyConfig `json:"key"`
}

func main() {
	go func() {
		for {
			time.Sleep(time.Second)

			val, err := config.Get("key.subkey")
			fmt.Println("Value of key.subkey: ", val.String(""), err)

			val, err = config.Get("key", config.Secret(true))
			if err != nil {
				fmt.Println(err)
			}
			c := conf{}
			err = val.Scan(&c.Key)
			fmt.Println("Value of key.subkey1: ", c.Key.Subkey1, err)
		}
	}()

	// run the service
	service.New().Run()
}
The above service will print the values of key.subkey and key.subkey1 every second. By passing in the config.Secret(true) option, we tell config to decrypt secret values for us, similarly to the --secret CLI flag.

The config interface specifies not just Get, Set and Delete to access values, but a few convenience functions too in the Value interface.

It is worth noting that the String, Int etc. methods make a best effort attempt at coercing types, ie. if the value saved is a string, Int will try to parse it. However, the same does not apply to the Scan method, which uses json.Unmarshal under the hood and therefore fails when encountering type mismatches.

Get should, in all cases, return a non-nil Value, so even if Get errors, Value.Int() and other operations should never panic.

Advanced Concepts

Merging Config Values

When saving a string with the CLI that is a valid JSON map, it gets expanded and saved as a proper map structure instead of as a string, ie

$ micro config set helloworld '{"a": "val1", "b": "val2"}'
$ micro config get helloworld.a
val1
# If the string were saved as is, `helloworld.a` would be a nonexistent path

The advantages of this become particularly visible when Setting a complex type with the library:

type conf struct {
	A string `json:"a"`
	B string `json:"b"`
}

c1 := conf{"val1", "val2"}
config.Set("key", c1)

v, _ := config.Get("key")
c2 := &conf{}
v.Scan(c2)
// c1 and c2 should now be equal

Or with the following example

$ micro config del helloworld
$ micro config set helloworld '{"a":1}'
$ micro config get helloworld
{"a":1}
$ micro config set helloworld '{"b":2}'
$ micro config get helloworld
{"a":1,"b":2}

Secret encryption keys for micro server

By default, if not specified, micro server generates and saves an encryption key to the location ~/.micro/config_secret_key. This is intended for local zero dependency use, but not for production.

To specify the secret for the micro server, either the env var MICRO_CONFIG_SECRET_KEY or the --config_secret_key flag must be specified.


The errors package provides error types for the most common HTTP status codes, e.g. BadRequest, InternalServerError etc. It's recommended that when returning an error from an RPC handler, one of these errors is used. If any other type of error is returned, it's treated as an InternalServerError.

Micro API detects these error types and will use them to determine the response status code. For example, if your handler returns errors.BadRequest, the API will return a 400 status code. If no error is returned the API will return the default 200 status code.

Error codes are also used when handling retries. If your service returns a 500 (InternalServerError) or 408 (Timeout) then the client will retry the request. Other status codes are treated as client error and won’t be retried.
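The retry rule described above can be sketched as a small predicate. This is an illustration of the stated behaviour, not the client's actual implementation; the function name retryable is hypothetical.

```go
package main

import "fmt"

// retryable reports whether a response code triggers a client retry:
// only 500 (InternalServerError) and 408 (Timeout) are retried,
// everything else is treated as a client error.
func retryable(code int32) bool {
	return code == 500 || code == 408
}

func main() {
	fmt.Println(retryable(500), retryable(408), retryable(400)) // true true false
}
```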


import (
	"context"

	"github.com/micro/micro/v3/service/errors"
)

func (u *Users) Read(ctx context.Context, req *pb.ReadRequest, rsp *pb.ReadResponse) error {
	if len(req.Id) == 0 {
		return errors.BadRequest("users.Read.MissingID", "Missing ID")
	}
	// ...
	return nil
}


The events service is a service for event streaming and persistent storage of events.


Event streaming differs from pubsub messaging in that it provides an ordered stream of events that can be consumed or replayed from any given point in the past. If you have experience with Kafka then you know it’s basically a distributed log which allows you to read a file from different offsets and stream it.

The event service and interface provide the event streaming abstraction for writing and reading events along with consuming from any given offset. It also supports acking and error handling where appropriate.

Events also differ from the broker in that they provide a fixed Event type, where you fill in the details and handle the decoding of the message body yourself. Events can have large payloads, so we don't want to unnecessarily decode where you may just want to hand off to a storage system.
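The replay-from-offset idea can be sketched with a toy in-memory append-only log (illustrative only; this is not the micro events API):

```go
package main

import "fmt"

// eventLog is a toy append-only event log. Unlike fire-and-forget pubsub,
// consumers can (re)read events from any offset in the past.
type eventLog struct {
	events []string
}

// Publish appends an event to the end of the log.
func (l *eventLog) Publish(e string) {
	l.events = append(l.events, e)
}

// Consume returns every event from the given offset onwards.
func (l *eventLog) Consume(offset int) []string {
	return l.events[offset:]
}

func main() {
	l := &eventLog{}
	l.Publish("user.created")
	l.Publish("user.updated")
	l.Publish("user.deleted")
	// Replay from offset 1, skipping the first event:
	fmt.Println(l.Consume(1)) // [user.updated user.deleted]
}
```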


The events package has two parts: Stream and Store. Stream is used to Publish and Consume messages for a given topic. For example, in a chat application one user would publish a message and another would consume it. If you later needed to retrieve messages, you could either replay them using the Consume function with the Offset option, or list them using the Read function.

func Publish(topic string, msg interface{}, opts ...PublishOption) error 

The Publish function has two required arguments: topic and message. Topic is the channel you’re publishing the event to, in the case of a chat application this would be the chat id. The message is any struct, e.g. the message being sent to the chat. When the subscriber receives the event they’ll be able to unmarshal this object. Publish has two supported options, WithMetadata to pass key/value pairs and WithTimestamp to override the default timestamp on the event.

func Consume(topic string, opts ...ConsumeOption) (<-chan Event, error)

The Consume function is used to consume events. In the case of a chat application, the client would pass the chat ID as the topic, and any events published to the stream will be sent to the event channel. Event has an Unmarshal function which can be used to access the message payload, as demonstrated below:

for {
	evChan, err := events.Consume(chatID)
	if err != nil {
		logger.Errorf("Error subscribing to topic %v: %v", chatID, err)
		return err
	}
	for {
		ev, ok := <-evChan
		if !ok {
			break
		}
		var msg Message
		if err := ev.Unmarshal(&msg); err != nil {
			logger.Errorf("Error unmarshaling event %v: %v", ev.ID, err)
			return err
		}
		logger.Infof("Received message: %v", msg.Subject)
	}
}


The Chat service demonstrates usage of the events service, leveraging both the stream and store functions.


The network is a service to service network for request proxying.


The network provides a service to service networking abstraction that includes proxying, authentication and tenancy isolation, and makes use of the existing service discovery and routing system. The goal here is not to provide a service mesh but a higher level control plane for routing that can govern access based on the existing system. The network requires every service to be pointed at it, making an explicit choice for routing.

Beneath the covers, Cilium, Envoy and other service mesh tools can be used to provide a highly resilient mesh.


The registry is a service directory and endpoint explorer


The service registry provides a single source of truth for all services and their APIs. All services register their name, version, address and endpoints with the registry service on startup. They then periodically re-register to "heartbeat"; otherwise they are expired based on a pre-defined TTL of 90 seconds.

The goal of the registry is to allow the user to explore APIs and services within a running system.

The simplest form of access is the below command to list services.

micro services


The getService endpoint returns information about a service, including request and response parameters for its endpoints:

$ micro registry getService --service=helloworld
{
	"services": [
		{
			"name": "helloworld",
			"version": "latest",
			"metadata": {
				"domain": "micro"
			},
			"endpoints": [
				{
					"name": "Helloworld.Call",
					"request": {
						"name": "CallRequest",
						"type": "CallRequest",
						"values": [
							{
								"name": "name",
								"type": "string",
								"values": []
							}
						]
					},
					"response": {
						"name": "CallResponse",
						"type": "CallResponse",
						"values": [
							{
								"name": "message",
								"type": "string",
								"values": []
							}
						]
					},
					"metadata": {}
				},
				{
					"name": "Helloworld.Stream",
					"request": {
						"name": "Context",
						"type": "Context",
						"values": []
					},
					"response": {
						"name": "Stream",
						"type": "Stream",
						"values": []
					},
					"metadata": {
						"stream": "true"
					}
				}
			],
			"nodes": [
				{
					"id": "helloworld-3a0d02be-f98e-4d9d-a8fa-24e942580848",
					"address": "",
					"port": "0",
					"metadata": {
						"broker": "service",
						"protocol": "grpc",
						"registry": "service",
						"server": "grpc",
						"transport": "grpc"
					}
				}
			],
			"options": {
				"ttl": "0",
				"domain": ""
			}
		}
	]
}
This is an especially useful feature for writing custom meta tools like API explorers.



The runtime service is responsible for running, updating and deleting binaries or containers (depending on the platform - eg. binaries locally, pods on k8s etc) and for accessing their logs.

Running a service

The micro run command tells the runtime to run a service. The following are all valid examples:

micro run github.com/micro/services/helloworld
micro run .  # deploy local folder to your local micro server
micro run ../path/to/folder # deploy local folder to your local micro server
micro run helloworld # deploy latest version, translates to micro run github.com/micro/services/helloworld or your custom base url
micro run helloworld@9342934e6180 # deploy certain version
micro run helloworld@branchname  # deploy certain branch
micro run --name helloworld .

Specifying Service Name

The service name is derived from the directory name of your application. In case you want to override this specify the --name flag.

micro run --name helloworld github.com/myorg/helloworld/server

Running a local folder

If the first parameter is an existing local folder, ie

micro run ./foobar

then the CLI will upload that folder to the runtime, and the runtime runs it.

Running a git source

If the first parameter to micro run points to a git repository (be it on GitHub, GitLab, Bitbucket or any other provider), then the address gets sent to the runtime and the runtime downloads the code and runs it.

Using references

References are the part of the first parameter passed to run after the @ sign. A reference can either be a branch name (no reference means the latest version, which equates to master in git terminology) or a commit hash.

When branch names are passed in, the latest commit of the code will run.
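The source/reference split described above can be sketched in plain Go. This is an illustration only, not the runtime's actual parser; the function name parseSource is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// parseSource splits a run target into a source and a reference:
// the reference is everything after the "@" sign, defaulting to "latest"
// when no "@" is present.
func parseSource(arg string) (source, ref string) {
	if i := strings.Index(arg, "@"); i >= 0 {
		return arg[:i], arg[i+1:]
	}
	return arg, "latest"
}

func main() {
	fmt.Println(parseSource("helloworld@9342934e6180"))
	fmt.Println(parseSource("helloworld"))
}
```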

Listing runtime objects

The micro status command lists all things running in the runtime:

$ micro status
helloworld	latest	github.com/micro/services/helloworld	running	n/a	20h43m45s ago	owner=admin, group=micro

The output includes the error if there is one. Commands like micro kill, micro logs and micro update accept the name returned by micro status as their first parameter (and not the service name, as the two might differ).

Updating a service

The micro update command makes the runtime pull the latest commit in the branch and restarts the service.

For local code it requires not the runtime name (returned by micro status) but the local path. For commit hash deploys it simply restarts the service.

Examples: micro update helloworld, micro update helloworld@branch, micro update helloworld@commit, micro update ./helloworld.

Deleting a service

The micro kill command removes a runtime object from the runtime. It accepts the name returned by micro status.

Examples: micro kill helloworld.


The micro logs command shows logs for a runtime object. It accepts the name returned by micro status.

The -f flag makes the command stream logs continuously.

Examples: micro logs helloworld, micro logs -f helloworld.


Micro’s store interface is for persistent key-value storage.

For a good beginner level doc on the Store, please see the Getting started tutorial.


Key-value stores that support ordering of keys can be used to build complex applications. Due to their very limited feature set, key-value stores generally scale easily and reliably, often linearly with the number of nodes added.

This scalability comes at the expense of inconvenience and mental overhead when writing business logic. For use cases where linear scalability is important, this trade-off is preferred.

Query by ID

Reading by ID is the archetypal job for key-value stores. Storing data to enable this query works just like in any other database:

# entries designed for querying "users by id"
KEY         VALUE
id1         {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}
id2         {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
id3         {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
id4         {"id":"id4", "name":"Betty","class":"thirdGrade",   "avgScore": 94}
import "github.com/micro/micro/v3/service/store"

records, err := store.Read("id1")
if err != nil {
	fmt.Println("Error reading from store: ", err)
}
fmt.Println(string(records[0].Value))
// Will output {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}

Given this data structure, we can do two queries: read a record by its ID, or list records in key order.

Finding values in an ordered set is possibly the simplest task we can ask of a database. The problem with the above data structure is that it's not very useful to ask "find me the keys coming in order after id2". To enable other kinds of queries, the data must be saved with different keys.

In the case of the school students, let's say we want to list by class. To do this, having the query in mind, we can copy the data over to another table named after the query we want to do:

Query by Field Value Equality

# entries designed for querying "users by class"
KEY             VALUE
firstGrade/id1  {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}
secondGrade/id2 {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
secondGrade/id3 {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
thirdGrade/id4  {"id":"id4", "name":"Betty","class":"thirdGrade",   "avgScore": 94}
import "github.com/micro/micro/v3/service/store"

records, err := store.Read("", store.Prefix("secondGrade"))
if err != nil {
	fmt.Println("Error reading from store: ", err)
}
// Will output
// secondGrade/id2 {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
// secondGrade/id3 {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}

Since the keys are ordered, it is trivial to get back, say, "all second graders". Key-value stores with ordered keys support something akin to a "key starts with" or "key has prefix" query. In the case of second graders, listing all records where the key starts with secondGrade gives us back all the second graders.
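The prefix query over ordered keys can be sketched with plain Go, using a binary search to the first matching key and a scan while the prefix holds (illustrative only; the helper readPrefix is hypothetical, not the store's API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// readPrefix returns every key with the given prefix from a sorted key list,
// the way an ordered key-value store answers a "key has prefix" query.
func readPrefix(keys []string, prefix string) []string {
	// binary-search to the first key >= prefix
	i := sort.SearchStrings(keys, prefix)
	var out []string
	// scan forward while the prefix still matches
	for ; i < len(keys) && strings.HasPrefix(keys[i], prefix); i++ {
		out = append(out, keys[i])
	}
	return out
}

func main() {
	keys := []string{"firstGrade/id1", "secondGrade/id2", "secondGrade/id3", "thirdGrade/id4"}
	sort.Strings(keys)
	fmt.Println(readPrefix(keys, "secondGrade")) // [secondGrade/id2 secondGrade/id3]
}
```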

This query is essentially a field-equality check: we effectively asked for class == secondGrade. But we could also exploit the ordered nature of the keys to do value comparison queries, e.g. avgScore less than 90, or avgScore between 90 and 95, if we model our data appropriately:

Query for Field Value Ranges

# entries designed for querying "users by avgScore"
KEY         VALUE
089/id3     {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
092/id2     {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
094/id4     {"id":"id4", "name":"Betty","class":"thirdGrade",   "avgScore": 94}
098/id1     {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}

It’s worth remembering that the keys are strings and that they are ordered lexicographically. For this reason, when dealing with numeric values, we must make sure they are zero-padded to the same length, otherwise a key beginning with “9” would sort after one beginning with “10”.

At the moment Micro’s store does not support this kind of query; this example is only here to hint at future possibilities for the store.

Tables Usage

Micro services only have access to one store table. This means all keys live in the same namespace and can collide. A very useful pattern is to separate entries by their intended query pattern, i.e. taking the “users by id” and “users by class” records above:

KEY         VALUE
# entries designed for querying "users by id"
usersById/id1         		{"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}
usersById/id2         		{"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
usersById/id3         		{"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
usersById/id4         		{"id":"id4", "name":"Betty","class":"thirdGrade",   "avgScore": 94}
# entries designed for querying "users by class"
usersByClass/firstGrade/id1  {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}
usersByClass/secondGrade/id2 {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
usersByClass/secondGrade/id3 {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
usersByClass/thirdGrade/id4  {"id":"id4", "name":"Betty","class":"thirdGrade",   "avgScore": 94}

The respective Go examples then become:

import "github.com/micro/micro/v3/service/store"

const idPrefix = "usersById/"

records, err := store.Read(idPrefix + "id1")
if err != nil {
	fmt.Println("Error reading from store: ", err)
}
// Will output {"id":"id1", "name":"Jane", "class":"firstGrade",   "avgScore": 98}
import "github.com/micro/micro/v3/service/store"

const classPrefix = "usersByClass/"

records, err := store.Read("", store.Prefix(classPrefix + "secondGrade"))
if err != nil {
	fmt.Println("Error reading from store: ", err)
}
// Will output
// usersByClass/secondGrade/id2 {"id":"id2", "name":"Alice","class":"secondGrade",  "avgScore": 92}
// usersByClass/secondGrade/id3 {"id":"id3", "name":"Joe",  "class":"secondGrade",  "avgScore": 89}
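Keeping the two tables in sync means writing each record under both key layouts. The sketch below derives both keys for a record; buildKeys and the User type are hypothetical helpers, and in a real service each key would be passed to the store’s write call rather than printed:

```go
package main

import "fmt"

// User mirrors the records in the tables above.
type User struct {
	ID, Name, Class string
	AvgScore        int
}

// buildKeys derives one key per query pattern for a user.
func buildKeys(u User) []string {
	return []string{
		"usersById/" + u.ID,                      // query by id
		"usersByClass/" + u.Class + "/" + u.ID,   // query by class
	}
}

func main() {
	u := User{ID: "id1", Name: "Jane", Class: "firstGrade", AvgScore: 98}
	for _, k := range buildKeys(u) {
		// In a service this would write the serialized user
		// under each key instead of printing it.
		fmt.Println(k)
	}
	// usersById/id1
	// usersByClass/firstGrade/id1
}
```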


Metadata / headers can be passed via the context in RPC calls. The context/metadata package allows services to get and set metadata in a context. The Micro API will add request headers into context, for example if the “Foobar” header is set on an API call to “localhost:8080/users/List”, the users service can access this value as follows:

import (
	"context"
	"fmt"

	"github.com/micro/micro/v3/service/context/metadata"
)

func (u *Users) List(ctx context.Context, req *pb.ListRequest, rsp *pb.ListResponse) error {
	val, ok := metadata.Get(ctx, "Foobar")
	if !ok {
		return fmt.Errorf("missing Foobar header")
	}

	fmt.Printf("Foobar header was set to: %v\n", val)
	return nil
}
Likewise, clients can set metadata in context using the metadata.Set function as follows:

func (u *Users) List(ctx context.Context, req *pb.ListRequest, rsp *pb.ListResponse) error {
	newCtx := metadata.Set(ctx, "Foobar", "mycustomval")
	fRsp, err := u.foosrv.Call(newCtx, &foosrv.Request{})
	if err != nil {
		return err
	}
	fmt.Println("Response: ", fRsp)
	return nil
}

Micro is a pluggable architecture built on Go’s interface types. Plugins enable swapping out underlying infrastructure.


Micro is pluggable, meaning the implementation of each module can be replaced depending on your requirements. Plugins are applied to the micro server and not to services directly; this is done so the underlying infrastructure can change with zero code changes required in your services.

An example of a pluggable interface is the store. Locally, micro uses a file store to persist data, which is great because it requires zero dependencies while still offering persistence between restarts. When running micro in a test suite, this can be swapped for an in-memory cache, which is better suited as it offers consistency between runs. In production, it can be swapped for standalone infrastructure such as CockroachDB or etcd, depending on the requirement.

Let’s take an example where our service wants to load data from the store. Our service would call store.Read(userPrefix + userID) to load the value; behind the scenes this executes an RPC to the store service, which in turn calls store.Read on the DefaultStore implementation configured for the server.


Profiles are used to configure multiple plugins at once. Micro comes with a few profiles out of the box, such as “local”, “kubernetes” and “test”. These profiles can be found in profile/profile.go. You can configure micro to use one of these profiles using the MICRO_PROFILE env var, for example: MICRO_PROFILE=test micro server. The default profile is “local”.

Writing a profile

Profiles should be created as packages within the profile directory. Let’s create a “staging” profile by creating profile/staging/staging.go. The example below shows how to override the default store of the local profile with an in-memory implementation:

// Package staging configures micro for a staging environment
package staging

import (
	"github.com/micro/micro/v3/profile"
	"github.com/micro/micro/v3/service/store"
	"github.com/micro/micro/v3/service/store/memory"
	"github.com/urfave/cli/v2"
)

func init() {
	profile.Register("staging", staging)
}

var staging = &profile.Profile{
	Name: "staging",
	Setup: func(ctx *cli.Context) error {
		store.DefaultStore = memory.NewStore()
		return nil
	},
}

Then initialise the profile as its own Go module:

pushd profile/staging
go mod init github.com/micro/micro/profile/staging
go mod tidy
popd

Using a custom profile

You can load a custom profile using a couple of commands, the first adds a replace to your go mod, indicating it should look for your custom profile within the profile directory:

go mod edit -replace github.com/micro/micro/profile/staging/v3=./profile/staging
go mod tidy

The second command creates a profile.go file which imports your profile. When your profile is imported, the init() function which is defined in staging.go is called, registering your profile.

micro init --profile=staging --output=profile.go

Now you can start your server using this profile:

MICRO_PROFILE=staging go run . server