SolrCloud 4.5
SolrCloud is designed to provide a highly available, fault-tolerant environment for indexing and searching data. In this environment, data is organized into multiple pieces, or shards, that can reside on several machines; replicas provide redundancy for both scalability and fault tolerance. Integration with ZooKeeper helps manage the overall environment so that both indexing and search requests can be routed properly.
In the following sections you will gain an understanding of:
- How to distribute data over multiple instances by using ZooKeeper and creating shards.
- How to create redundancy for shards by using replicas.
- How to create redundancy for the overall cluster by running multiple ZooKeeper instances.
This section explains SolrCloud and its inner workings in detail, but before you dive in, it's best to have an idea of what it is you're trying to accomplish. This page provides a simple tutorial that explains how SolrCloud works on a practical level, and how to take advantage of its capabilities. We'll use simple examples of configuring SolrCloud on a single machine.
Simple Two-Shard Cluster on the Same Machine
Creating a cluster with multiple shards involves two steps:
- Start the "overseer" node, which includes an embedded ZooKeeper server to keep track of your cluster.
- Start any remaining shard nodes and point them to the running ZooKeeper.
In this example, you'll create two separate Solr instances on the same machine. This is not a production-ready installation, but just a quick exercise to get you familiar with SolrCloud.
For this exercise, we'll start by creating two copies of the example directory that is part of the Solr distribution:
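Assuming you are working from the root of the Solr distribution, the copies can be made like this (the directory names node1 and node2 are arbitrary choices):

```shell
cp -r example node1
cp -r example node2
```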
These copies of the example directory can really be called anything. All we're trying to do is copy Solr's example app to the side so we can play with it and still have a stand-alone Solr example to work with later if we want.
Next, start the first Solr instance, including the -DzkRun parameter, which also starts a local ZooKeeper instance:
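A start command along these lines would do it, run from inside the first copy (the configuration path assumes the stock collection1 example that ships with Solr):

```shell
cd node1
java -DzkRun -DnumShards=2 \
     -Dbootstrap_confdir=./solr/collection1/conf \
     -Dcollection.configName=myconf \
     -jar start.jar
```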
Let's look at each of these parameters:
-DzkRun Starts up a ZooKeeper server embedded within Solr. This server will manage the cluster configuration. Note that we're doing this example all on one machine; when you start working with a production system, you'll likely use multiple ZooKeepers in an ensemble (or at least a stand-alone ZooKeeper instance). In that case, you'll replace this parameter with zkHost=<ZooKeeper Host:Port>, which is the hostname:port of the stand-alone ZooKeeper.
-DnumShards Determines how many pieces you're going to break your index into. In this case we're going to break the index into two pieces, or shards, so we're setting this value to 2. Note that once you start up a cluster, you cannot change this value. So if you expect to need more shards later on, build them into your configuration now (you can do this by starting all of your shards on the same server, then migrating them to different servers later).
-Dbootstrap_confdir ZooKeeper needs to get a copy of the cluster configuration, so this parameter tells it where to find that information.
-Dcollection.configName This parameter determines the name under which that configuration information is stored by ZooKeeper. We've used "myconf" as an example; it can be anything you'd like.
At this point you have one server running, but it represents only half of the shards, so you will need to start the second one before you have a fully functional cluster. To do that, start the second instance in another window as follows:
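For example (the Jetty port 7574 is an arbitrary choice, and localhost:9983 assumes the embedded ZooKeeper is running on the first instance's port plus 1000):

```shell
cd node2
java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
```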
Because this node isn't the overseer, the parameters are a bit less complex:
-Djetty.port The only reason we even have to set this parameter is because we're running both servers on the same machine, so they can't both use Jetty's default port. In this case we're choosing an arbitrary number that's different from the default. When you start on different machines, you can use the same Jetty ports if you'd like.
-DzkHost This parameter tells Solr where to find the ZooKeeper server so that it can "report for duty". By default, the ZooKeeper server operates on the Solr port plus 1000. (Note that if you were running an external ZooKeeper server, you'd simply point to that.)
At this point you should have two Solr windows running, both being managed by ZooKeeper. To verify that, open the Solr Admin UI in your browser and go to the Cloud screen:
Use the port of the first Solr instance you started; this is your overseer. You can go to the Cloud screen at http://localhost:8983/solr/#/~cloud (adjust the port if your overseer runs elsewhere).
You should see both node1 and node2 in the cloud graph.
Post data to the newly created nodes. (You can do this any way you like; the easiest way is to use the files in exampledocs.)
java -Durl=http://localhost:<port1>/solr/collection1/update -jar post.jar mem.xml
java -Durl=http://localhost:<port2>/solr/collection1/update -jar post.jar monitor2.xml
At this point each shard contains a subset of the data, but a search directed at either server should span both shards. For example, the following searches should both return the identical set of all results:
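Assuming the default port 8983 for the first instance and 7574 for the second, the two searches would be:

```
http://localhost:8983/solr/collection1/select?q=*:*
http://localhost:7574/solr/collection1/select?q=*:*
```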
The reason that this works is that each shard knows about the other shards, so the search is carried out on all cores, then the results are combined and returned by the called server.
In this way you can have two cores or two hundred, with each containing a separate portion of the data.
But what about providing high availability, even if one of these servers goes down? To do that, you'll need to look at replicas.
Two-Shard Cluster with Replicas
In order to provide high availability, you can create replicas: copies of each shard that run in parallel with the main core for that shard. The architecture consists of the original shards, called the leaders, and their replicas, which contain the same data but let the leader handle all of the administrative tasks, such as making sure data goes to all of the places it should go. This way, if one copy of a shard goes down, the data is still available and the cluster can continue to function.
Start by creating two more fresh copies of the example directory:
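For example, again from the root of the Solr distribution:

```shell
cp -r example node3
cp -r example node4
```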
Just as when we created the first two shards, you can name these copied directories whatever you want.
If you don't already have the two instances you created in the previous section up and running, go ahead and restart them. From there, it's simply a matter of adding additional instances. Start by adding node3:
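Assuming the ports used elsewhere in this example, node3 can be started like this:

```shell
cd node3
java -Djetty.port=8900 -DzkHost=localhost:9983 -jar start.jar
```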
Notice that the parameters are exactly the same as they were for starting the second node; you're simply pointing a new instance at the original ZooKeeper. But if you look at the SolrCloud admin page, you'll see that it was added not as a third shard, but as a replica for the first:
This is because the cluster already knew that there were only two shards and they were already accounted for, so new nodes are added as replicas. Similarly, when you add the fourth instance, it's added as a replica for the second shard:
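The fourth instance is started the same way, on its own port:

```shell
cd node4
java -Djetty.port=7500 -DzkHost=localhost:9983 -jar start.jar
```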
Post data to the newly created nodes. (You can do this any way you like; the easiest way is to use the files in exampledocs.)
java -Durl=http://localhost:<port3>/solr/collection1/update -jar post.jar money.xml
Execute search queries on all four nodes and check how the data is distributed across them. Use distrib=false to query a single core directly, which makes it easy to identify which documents live on which node.
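For example, to see only the documents stored locally on the node at port 8900:

```
http://localhost:8900/solr/collection1/select?q=*:*&distrib=false
```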
If you were to add additional instances, the cluster would continue this round-robin, adding replicas as necessary. Replicas are attached to leaders in the order in which they are started, unless they are assigned to a specific shard with an additional parameter of shardId (as a system property, as in -DshardId=1, the value of which is the ID number of the shard the new node should be attached to). Upon restarts, the node will still be attached to the same leader even if the shardId is not defined again (it will always be attached to that machine).
So where are we now? You now have four servers to handle your data. If you were to send data to a replica, as in:
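For example, posting one of the example documents to the replica on port 7500 (the file name here is just an illustration):

```shell
java -Durl=http://localhost:7500/solr/collection1/update -jar post.jar monitor.xml
```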
the course of events goes like this:
- Replica (in this case the server on port 7500) gets the request.
- Replica forwards request to its leader (in this case the server on port 7574).
- The leader processes the request, and makes sure that all of its replicas process the request as well.
In this way, the data is available via a request to any of the running instances, as you can see by requests to:
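That is, any of these four queries should return the full result set:

```
http://localhost:8983/solr/collection1/select?q=*:*
http://localhost:7574/solr/collection1/select?q=*:*
http://localhost:8900/solr/collection1/select?q=*:*
http://localhost:7500/solr/collection1/select?q=*:*
```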
But how does this help provide high availability? Simply put, a cluster must have at least one server running for each shard in order to function. To test this, shut down the server on port 7574, and then check the other servers:
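For instance:

```
http://localhost:8983/solr/collection1/select?q=*:*
http://localhost:7500/solr/collection1/select?q=*:*
```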
You should continue to see the full set of data, even though one of the servers is missing. In fact, you can have multiple servers down, and as long as at least one instance for each shard is running, the cluster will continue to function. If the leader goes down – as in this example – a new leader will be "elected" from among the remaining replicas.
Note that when we talk about servers going down, in this example it's crucial that one particular server stays up, and that's the one running on port 8983. That's because it's our overseer – the instance running ZooKeeper. If that goes down, the cluster can continue to function under some circumstances, but it won't be able to adapt to any servers that come up or go down.
That kind of single point of failure is obviously unacceptable. Fortunately, there is a solution for this problem: multiple ZooKeepers.
Using Multiple ZooKeepers in an Ensemble
To truly provide high availability, we need to make sure not only that at least one server for each shard is running at all times, but also that the cluster has a ZooKeeper running to manage it. To do that, you can set up a cluster to use multiple ZooKeepers. This is called using a ZooKeeper ensemble.
A ZooKeeper ensemble can keep running as long as more than half of its servers are up, so at least two servers in a three-server ensemble, three servers in a five-server ensemble, and so on, must be running at any given time. This required majority of servers is called a quorum.
In this example, you're going to set up the same two-shard cluster you were using before, but instead of a single ZooKeeper, you'll run a ZooKeeper server on three of the instances. Start by cleaning up any ZooKeeper data from the previous example:
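Assuming the directory layout used above, where each instance keeps its embedded ZooKeeper data under solr/zoo_data, the cleanup might look like:

```shell
rm -r node*/solr/zoo_data
```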
Next you're going to restart the Solr servers, but this time, rather than having them all point to a single ZooKeeper instance, each will run ZooKeeper and listen to the rest of the ensemble for instructions.
You're using the same ports as before – 8983, 7574, 8900 and 7500 – so any ZooKeeper instances would run on ports 9983, 8574, 9900 and 8500. You don't actually need to run ZooKeeper on every single instance, however, so assuming you run ZooKeeper on 9983, 8574, and 9900, the ensemble would have an address of:
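That is, the comma-separated list of ZooKeeper host:port pairs:

```
localhost:9983,localhost:8574,localhost:9900
```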
This means that when you start the first instance, you'll do it like this:
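For example (here bootstrap_conf=true tells Solr to upload the configuration for each of its cores to ZooKeeper, rather than pointing at a single conf directory):

```shell
cd node1
java -DzkRun -DnumShards=2 -Dbootstrap_conf=true \
     -DzkHost=localhost:9983,localhost:8574,localhost:9900 \
     -jar start.jar
```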
You'll notice a lot of error messages scrolling past; this is because the ensemble doesn't yet have a quorum of ZooKeepers running.
Notice also, that this step takes care of uploading the cluster's configuration information to ZooKeeper, so starting the next server is more straightforward:
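The second server also runs ZooKeeper, so it includes -DzkRun, but it needs no bootstrap parameters:

```shell
cd node2
java -Djetty.port=7574 -DzkRun \
     -DzkHost=localhost:9983,localhost:8574,localhost:9900 \
     -jar start.jar
```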
Once you start this instance, you should see the errors begin to disappear on both instances, as the ZooKeepers begin to update each other, even though you only have two of the three ZooKeepers in the ensemble running.
Next start the last ZooKeeper:
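This is the third and final instance that runs ZooKeeper:

```shell
cd node3
java -Djetty.port=8900 -DzkRun \
     -DzkHost=localhost:9983,localhost:8574,localhost:9900 \
     -jar start.jar
```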
Finally, start the last replica, which doesn't itself run ZooKeeper, but references the ensemble:
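Note that this instance omits -DzkRun and only points at the ensemble:

```shell
cd node4
java -Djetty.port=7500 \
     -DzkHost=localhost:9983,localhost:8574,localhost:9900 \
     -jar start.jar
```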
Just to make sure everything's working properly, run a query:
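For example:

```
http://localhost:8983/solr/collection1/select?q=*:*
```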
and check the SolrCloud admin page:
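The Cloud screen of the overseer's admin UI, assuming the default port 8983, is at:

```
http://localhost:8983/solr/#/~cloud
```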
Now you can go ahead and kill the server on 8983, but ZooKeeper will still work, because you have more than half of the original servers still running. To verify, open the SolrCloud admin page on another server, such as:
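For example:

```
http://localhost:7574/solr/#/~cloud
```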