Riak_Core with Elixir: Part Two
In the previous post we created a bare-bones app with riak_core. Now it's time to add some functionality.
Before we start adding functionality to our app, we need to understand the various components of riak_core that we will eventually interact with. These are:
- Riak_core ring
- Vnode
- Riak_core_vnode_master
- Riak_core_node_watcher
Riak_Core ring
The riak_core ring is a collection of virtual nodes, or vnodes (think of a vnode as a bucket where we store our data and business logic). Each server in a riak_core cluster hosts multiple vnodes, and all of the vnodes in the cluster together make up the ring.
You can also think of the riak_core ring as the cluster-wide state, stored in a ring structure. This state information is transferred between nodes in the cluster in a controlled manner to keep all cluster members in sync.
Vnode
The next most important component of our application is the vnode. You can think of a vnode as a bucket where we store our data and business logic. Each riak_core app has a fixed number of vnodes that are distributed across all the instances of your app (its physical nodes, i.e. app servers); vnodes move from instance to instance when the number of instances changes, in order to balance the load and provide fault tolerance and scalability.
We will dive deep into the vnode and its implementation later; for now, just think of a vnode as a bucket that holds your data and business logic.
Riak_core_vnode_master
The riak_core_vnode_master process is responsible for handling all incoming requests and forwarding them to the vnodes. In our code we won't interact with the vnodes directly; instead we will interact with riak_core_vnode_master, which in turn forwards our requests to the vnodes.
Riak_Core_Node_Watcher
Next, riak_core_node_watcher is the process responsible for tracking the status of nodes within a riak_core cluster. It also has the capability to take a node out of the cluster programmatically. This is useful in situations where a brief node outage is necessary but you don't want to stop the server software completely.
riak_core_node_watch_events cooperates with riak_core_node_watcher to generate events based on node activity, i.e. joining or leaving the cluster, etc. Interested parties can register callback functions which will be called as events occur.
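To make this a little more concrete, here is a minimal sketch of the kind of calls riak_core_node_watcher exposes, run from an iex shell. The service name :vyuha is an assumption for illustration; we haven't registered a service under that name yet.
# List the services riak_core_node_watcher knows about on this node.
:riak_core_node_watcher.services()
# List the nodes currently providing the :vyuha service
# (:vyuha is an assumed service name for this sketch).
:riak_core_node_watcher.nodes(:vyuha)
# Take the :vyuha service on this node out of the cluster for a while,
# then mark it as up again, pointing it at the current process.
:riak_core_node_watcher.service_down(:vyuha)
:riak_core_node_watcher.service_up(:vyuha, self())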
Initialization 1: Registering a new riak_core ring
Before we can perform any operation on riak_core in our application, we need to initialize the riak_core ring.
The riak_core ring has two methods of initialization. One is to use the :riak_core.register/1 function, which takes a list of properties as input. At the very least, the user has to specify a module implementing the vnode behaviour.
Alternatively, we can use the :riak_core.register/2 function to create a named riak_core app. This function takes a unique name (an atom) and a list of properties as input. At the very least, the user has to specify a unique name for the ring and a module implementing the vnode behaviour.
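For illustration, a :riak_core.register/2 call would look something like the line below, where :vyuha is an assumed name for the ring:
:ok = :riak_core.register(:vyuha, [{:vnode_module, Vyuha.Vnode}])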
We will use the :riak_core.register/1 function for easy initialization.
Remember that a riak_core ring takes only one type of vnode; this essentially means that you can't put two different vnode modules into the same riak_core ring.
Now comes the question: where do we do this initialization in our project? It should be done in your vyuha.ex file. Change vyuha.ex to add the line below:
:ok = :riak_core.register([{:vnode_module, Vyuha.Vnode}])
In the above line we are telling riak_core to register a new ring whose vnode module is Vyuha.Vnode.
The complete vyuha.ex file:
defmodule Vyuha do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    :ok = :riak_core.register([{:vnode_module, Vyuha.Vnode}])

    children = []

    opts = [strategy: :one_for_one, name: Vyuha.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
Now if you try to run the app with the following command in the terminal:
iex --name nav@127.0.0.1 -S mix
the app will fail, as it is not able to find the Vyuha.Vnode module.
So go ahead and create a Vnode.ex file in the lib folder and paste the code below into that file:
defmodule Vyuha.Vnode do
  require Logger
  @behaviour :riak_core_vnode

  def start_vnode(partition) do
    :riak_core_vnode_master.get_vnode_pid(partition, __MODULE__)
  end

  def init([partition]) do
    {:ok, %{partition: partition}}
  end

  def handle_command(:ping, _sender, %{partition: partition} = state) do
    Logger.warn("got a ping request!")
    {:reply, {:pong, partition}, state}
  end

  def handle_handoff_command(_fold_req, _sender, state) do
    {:noreply, state}
  end

  def handoff_starting(_target_node, state) do
    {true, state}
  end

  def handoff_cancelled(state) do
    {:ok, state}
  end

  def handoff_finished(_target_node, state) do
    {:ok, state}
  end

  def handle_handoff_data(_data, state) do
    {:reply, :ok, state}
  end

  def encode_handoff_item(_object_name, _object_value) do
    ""
  end

  def is_empty(state) do
    {true, state}
  end

  def delete(state) do
    {:ok, state}
  end

  def handle_coverage(_req, _key_spaces, _sender, state) do
    {:stop, :not_implemented, state}
  end

  def handle_exit(_pid, _reason, state) do
    {:noreply, state}
  end

  def terminate(_reason, state) do
    :ok
  end
end
We will dig deeper into the Vnode module later, but for now all we need to understand is that a vnode module is required in order to initialize the riak_core ring.
Now try running the above command again:
iex --name nav@127.0.0.1 -S mix
The app should start without any errors.
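If you want to confirm that the ring was actually created, you can poke at it from the same iex session. Here is a quick sketch using riak_core's ring manager (the exact output depends on your ring size and node name):
# Fetch this node's copy of the ring from the ring manager.
{:ok, ring} = :riak_core_ring_manager.get_my_ring()
# Print a summary of how the ring's partitions are assigned to nodes.
:riak_core_ring.pretty_print(ring, [:legend])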
Initialization 2: Interacting with Vnode
As I said earlier, vnodes are the smallest units that hold our data and business logic, and we won't interact with them directly; instead we will interact with the riak_core_vnode_master process. This process is responsible for handling all incoming requests and forwarding them to the vnodes.
Since the vnodes' life cycle is handled internally by riak_core, we don't need to worry about it much (we know riak_core will take care of them and spawn one when needed).
However, riak_core_vnode_master is a different story. We need to start it in our application along with the riak_core ring, and since we are starting it ourselves we need to put it under a supervisor; otherwise, if it crashes, it will bring the entire system down with it.
While starting the riak_core_vnode_master process we pass it the name of the vnode module, as shown in the line below.
worker(:riak_core_vnode_master, [Vyuha.Vnode], id: Vyuha.Vnode_master_worker)
Hint: In the above code, worker is a function imported from Supervisor.Spec. You can read it as: start riak_core_vnode_master by calling its start_link function with Vyuha.Vnode as the argument, and register this child under the id Vyuha.Vnode_master_worker within the supervisor.
It’s common to give names to processes under supervision so that other processes can access them by name without needing to know their pid. This is useful because a supervised process might crash, in which case its pid will change when the supervisor restarts it. By using a name, we can guarantee the newly started process will register itself under the same name, without a need to explicitly fetch the latest pid.
So now the vyuha.ex file will look something like this:
defmodule Vyuha do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    :ok = :riak_core.register([{:vnode_module, Vyuha.Vnode}])

    children = [
      worker(:riak_core_vnode_master, [Vyuha.Vnode], id: Vyuha.Vnode_master_worker)
    ]

    opts = [strategy: :one_for_one, name: Vyuha.Supervisor, max_restarts: 5, max_seconds: 10]
    Supervisor.start_link(children, opts)
  end
end
And with the options
opts = [strategy: :one_for_one, name: Vyuha.Supervisor, max_restarts: 5, max_seconds: 10]
we tell the supervisor to use the one_for_one strategy, which means that when the riak_core_vnode_master process crashes only that process is restarted, and the supervisor allows at most 5 restarts within a 10 second window before giving up.
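To tie the pieces together, here is a rough sketch of how a request could eventually be routed through riak_core_vnode_master to our vnode. The Vyuha.Service module, the :vyuha service name, and the assumption that the service has been marked as up via riak_core_node_watcher are all placeholders for illustration; we will build the real API in a later post.
defmodule Vyuha.Service do
  def ping do
    # Hash an arbitrary {bucket, key} pair onto the ring to pick a partition.
    idx = :riak_core_util.chash_key({"ping", "key#{:os.system_time(:millisecond)}"})

    # Ask for one primary partition from the preference list for that hash.
    # This only returns results once the :vyuha service has been marked as up.
    [{index_node, _type}] = :riak_core_apl.get_primary_apl(idx, 1, :vyuha)

    # Send the :ping command through riak_core_vnode_master and wait for the
    # vnode's reply ({:pong, partition} from our handle_command/3).
    # Vyuha.Vnode_master is the name riak_core derives from our vnode module.
    :riak_core_vnode_master.sync_spawn_command(index_node, :ping, Vyuha.Vnode_master)
  end
end
With everything wired up, calling Vyuha.Service.ping() from iex would hit one of the vnodes and log the "got a ping request!" warning.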
Code up to this point can be found here.
Read the next post here.