I just configured two new indexes, set up serverclass.conf for them, and an inputs.conf to assign the sourcetype, index, etc.
The new folders (named after the indexes) exist under /opt/splunk/var/lib/splunk, but there is no data saved under the /db folder.
Now I see in splunkd.log that the events are coming in, as follows:
03-27-2012 13:20:20.238 -0700 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event. Context="source::/apps/oracle/domains/xyz_osb_domain/servers/osb_server2/logs/osb_server2.log|host::xyz.xyz.com|osb_serverlog|remoteport::43536" Text=" <FIX_VO>VO</FIX_VO>\n <FIX_EC>EC</FIX_EC>\n <ACCOUNT_NO>542EDDB9E60AD97EF94CEBCEFBE3CF..."
Where do I look to see what is wrong with the new indexes I made, the serverclass.conf changes, and the inputs.conf configuration? And why is the data not getting indexed, since the forwarders look like they are working fine?
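For reference, the indexes were defined with minimal stanzas roughly like the following (the index name `osb` here is an assumption based on the sourcetype in the log excerpt; the attribute names are standard indexes.conf settings):

```ini
# indexes.conf on the indexer -- minimal stanza for a new index
# ("osb" is a hypothetical name; substitute the actual index name)
[osb]
homePath   = $SPLUNK_DB/osb/db
coldPath   = $SPLUNK_DB/osb/colddb
thawedPath = $SPLUNK_DB/osb/thaweddb
```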
If I understood you correctly, you have created 2 new indexes on your indexer. Then you have deployed an inputs.conf to your forwarders, and this inputs.conf specifies that some input file should be read and have its index=XXXXX set to the newly created index, right?
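In other words, something along these lines on the forwarder (a sketch only — the monitor path and sourcetype are taken from the log excerpt in the question, and the index name is assumed):

```ini
# inputs.conf deployed to the forwarders (illustrative values)
[monitor:///apps/oracle/domains/xyz_osb_domain/servers/osb_server2/logs/osb_server2.log]
index = osb
sourcetype = osb_serverlog
```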
1) How do you know that data is not getting indexed?
2) Have you configured timestamp extraction in props.conf (e.g. TIME_PREFIX, TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD) for your sourcetype? These should be set on the indexer, unless you have a heavy/full forwarder.
3) Network configuration
By answering these questions, you should be a lot closer to finding WHERE the problem lies.
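Regarding 2), a props.conf sketch for explicit timestamp extraction would look something like the one below. The TIME_FORMAT value here is purely an example — you would need to adjust it to the actual timestamp format in your events (the sample event in the question appears to be XML without an obvious timestamp, which is likely why the parser is falling back to the previous event's timestamp):

```ini
# props.conf on the indexer (or heavy forwarder) -- example values only
[osb_serverlog]
TIME_PREFIX = ^
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```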
UPDATE in response to your comments below:
1) Did you check Manager->indexes on the indexer? It would not show up on a search head.
What is your setup? Single server combining Indexing, Searching and Deployment, or dedicated servers for each role?
Yes, I assumed that the event you posted was trying to make it into your new index. Something is wrong with the timestamp parsing.
Have you tried searching for the string "< ACCOUNTNUMBER>5233423..." (from your original question) with index=*, over All time? That should tell you where it ended up.
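For example, using the <ACCOUNT_NO> tag visible in your event sample (a sketch — adjust the quoted string to match your actual data):

```
index=* "<ACCOUNT_NO>" | stats count by index, sourcetype
```

If the events were indexed anywhere at all, the stats table will show which index and sourcetype they landed in.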
2) Have you checked that you have access to the index, and that it is searched by default? It does not necessarily have to be that way, even if you are an admin (but if there is an EventCount of 0 in Manager->indexes for that index, that should not be the problem).
Have you made sure that you (or rather the splunk process) have permission to write data to the filesystem where your new index is located? Is there enough space?
Also, as you certainly know already, any search for
index=a index=b will always fail, since no event can belong to two indexes.
index=a OR index=b should have a much better chance of succeeding.
3) No, data can be sent in several different ways from a forwarding instance. Double-check inputs.conf on the forwarder for any misspellings in index=... Double-check the outputs.conf settings on the forwarder. Double-check the inputs.conf settings on the server to ensure that you are indeed listening on the correct ports. Double-check indexes.conf on the server.
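Concretely, the sending and receiving sides need to agree on the port, roughly like this (9997 is the conventional receiving port and the hostname is a placeholder; yours may differ):

```ini
# outputs.conf on the forwarder (hostname is a placeholder)
[tcpout:primary_indexers]
server = indexer.example.com:9997

# inputs.conf on the indexer -- must listen on the same port
[splunktcp://9997]
```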
Triple-check your deployed application so that you do not have a configuration file precedence problem, i.e. config settings in one file being overridden by other settings in another file.
BTW, are you deploying deploymentclient.conf through the Deployment Server? Presumably you also deploy the inputs.conf, and possibly the outputs.conf, that way.
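For the record, a typical deploymentclient.conf looks something like this (the target host and management port are placeholders):

```ini
# deploymentclient.conf on the forwarder (placeholder values)
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089
```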
Hope this helps,
1) An All time search and Manager->indexes both show 0. I have not checked props.conf; is that for the time parse errors I see in splunkd.log?
2) My user's role is admin, so that should not be an issue. I specified the index=soa and index=osb parts of the search, over all time.
3) I see the OS index getting system resource data fine from the same forwarders. We use a deployment server and drop a deploymentclient.tar that has app.conf and deploymentclient.conf, which tell the forwarder how to talk to the deployment server. As for the right port, I assume the port is the same for system data and log data.
Forgive me for stating what could be obvious, but check your indexes page under Manager to ensure the newly created indexes show up and are enabled. Typically a restart of splunk ensures the new configuration is used, and a check of $SPLUNK_HOME/var/log/splunk/splunkd.log will return errors if you are incorrectly configured. Otherwise, from my experience, ensure you didn't fat-finger any of the configuration files. That seems to be my demise 50% of the time.
Thanks for all the suggestions. Kristian, they are going to help a lot in the future.
But in this instance a simple restart of the indexer fixed it. I thought doing a 'splunk reload deploy-server' on the deployment server would take care of creating new indexes without a restart of the indexers. Am I wrong in assuming that?
Well, if you push out indexes.conf to your indexer(s), and specify restartSplunkd=true in your serverclass.conf stanza (either under [global] or the specific serverClass/app), that SHOULD be enough.
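For example (the serverClass and app names here are made up; the key line is restartSplunkd):

```ini
# serverclass.conf on the deployment server (illustrative names)
[serverClass:indexers:app:new_indexes]
restartSplunkd = true
```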
Good to hear you solved the problem.
When dealing with indexes, a restart of splunk is required to pick up the new information. 'splunk reload deploy-server' will only re-read information pertaining to the deployment server and client aspects, whereas indexes are not considered part of the deployment-server/client machinery. Please mark this question as answered for the appropriate person. Thanks!
Aah, I see, I think.
You are running the DS on the indexer, right?
You are not pushing the indexes.conf to the indexer as (part of) an app? If you were, reloading the deploy-server should be enough (given that you have restartSplunkd=true in the correct part of your serverclass.conf).