You still have not provided an example of a full event. When you do, I can provide a solution for your issue. If it contains sensitive information, just change the values, but keep the formatting.
Enclose your sample event in either backticks ( ` ):
`2019-08-02T00:00:00.000 timestamp1 Jan 1, 1970 00:01:02.003 timestamp2`
or code tags:
< pre> ... < /pre>
(remove the spaces after the less-than signs)
2019-08-02T00:00:00.000 timestamp1 Jan 1, 1970 00:01:02.003 timestamp2
Hello @amahesh3,
Your field extraction is not working because it does not appear to account for location names that contain spaces. Please provide a few sample events with spaces in the location names, along with your current extraction configuration, and then someone can help you build the proper replacement for the field extraction.
Hope this helps.
Hi @sbimizry,
If you have not included a time value anywhere in your lookup, then you cannot do this. Lookup files are basically state tables that the owner defines and updates. This means the owner also defines which fields to include in the lookup, which may or may not (most do not) include a field that references a time value. Even if it DOES reference a time value, it may not be the time value you are thinking of. You would need some logic that runs when you create or update the lookup to add a time value equal to the execution time of that creation/update. Once you have a time field, you can re-map it to the _time field, which should allow you to search with earliest=-24h@h (you don't need latest=now; Splunk assumes latest=now if you don't provide a latest= statement).
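For illustration, here is a minimal sketch of stamping a lookup with its creation time (the lookup name mylookup.csv and the field name last_updated are hypothetical; this assumes the lookup is built by a scheduled search):
<your search>
| eval last_updated=now()
| outputlookup mylookup.csv
At search time, you would then re-map the stamp to _time so that time-based filtering works:
| inputlookup mylookup.csv
| eval _time=last_updated
| where _time >= relative_time(now(), "-24h@h")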
If you HAVE included a time field in your lookup, then you can also use @woodcock's solution above:
| inputlookup lookup
| addinfo
| where (_time >= info_min_time AND _time <= info_max_time)
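(The addinfo command appends the search's time-range boundaries as info_min_time and info_max_time, which is what lets the where clause honor the time picker.)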
I hope this information provides you with your answer.
Hi @prcough,
You're missing the setting that allows multiple values:
transforms.conf
[my_sourcetype]
DELIMS = "\"{,}", ": "
MV_ADD = true
You could also try:
[my_sourcetype]
REGEX = '([^']+)': '([^']+)'
FORMAT = $1::$2
MV_ADD = true
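In case it's not already in place, the transform also needs to be referenced from props.conf (a sketch, assuming your sourcetype and the transforms stanza are both named my_sourcetype as above):
props.conf
[my_sourcetype]
REPORT-extract_kv = my_sourcetype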
Anecdotally, I hear that the same configuration in ELK requires twice the hardware and twice the administration cost (people to administer the product). Visualizations are easier to create in Splunk, which supports drag-and-drop dashboards, whereas in ELK they have to be coded. I also hear that ELK does not scale as well as Splunk does.
Be a little bit more specific. If you have three values for C1=A of 10, 4, and 5, what would you consider the difference? A difference only works between two values, unless you mean the most recent value minus all previous values (i.e., 10 - 4 - 5 = 1).
C1 C2
----------
A 10
A 4
A 5
What would you consider the "difference" for the above?
Your explanation is too confusing. Try putting together a few examples of data rows, with sensitive information removed, along with your desired output.
While it is documented (in different places that may not make a lot of sense), I would be more interested in why you are asking this question. Do you think that ES lookups are breaking something in search? Are you just curious how ES works? If you can answer that, you might get more informative answers.
Still not fixed. The link points to "http://manage+internal+lookups/" instead of "https://docs.splunk.com/Documentation/ES/5.3.0/Admin/Manageinternallookups".
Hi @jordanmedved,
Here is a regex you can use (the capture-group names are illustrative; rename them to suit your data):
| rex "\"properties\": {\"msg\":\"(?<req_type>\S+) request from (?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(:(?<src_port>\d+))? to (?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(:(?<dest_port>\d+))?\. Action: (?<action>[^\"]+)\""
You could also try specifying the field directly:
| rex field=properties.msg "(?<req_type>\S+) request from (?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(:(?<src_port>\d+))? to (?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(:(?<dest_port>\d+))?\. Action: (?<action>[^\"]+)"
Let me know if that works for you.
What is the purpose of your search head pooling? Do you have more than 10 users at any given time running Splunk searches? What is the hardware specification of the search heads (CPU, memory, disk space)?
Hi @rajasekhar14,
It's not straight PCRE; it includes the path translations specified in the props.conf documentation.
You could use something like:
[source::.../(messages|performance)/345/345/*.blob]
sourcetype = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+){\s*messages
I would recommend using SHOULD_LINEMERGE = false with LINE_BREAKER instead of BREAK_ONLY_BEFORE; it skips the line-merging step and performs better at index time.
See if that works for you.
You can verify that it's set properly with the following btool command:
/opt/splunk/bin/splunk btool props list
You should notice that for EVERY stanza there will be a line that says:
TRANSFORMS-set_index = set_index
That shows that the props.conf part of it is setup correctly.
I found the issue: the host is reported as host::<hostname>, not <hostname>. I have updated the answer above and validated that it works on a standalone instance.
Hi @dglass0215,
You could do this by modifying props.conf & transforms.conf on your heavy forwarders/indexers:
props.conf
[default]
TRANSFORMS-set_index = set_index
transforms.conf
[set_index]
SOURCE_KEY = MetaData:Host
DEST_KEY = _MetaData:Index
REGEX = ^(?:host::)?(\w\w\w)
FORMAT = $1
This will take every event, look at the first 3 characters of the host value, and set those 3 characters as the target index for ingestion. You will need to make sure those indexes exist; otherwise the events will be dropped.
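For example (hypothetical host names): events from host nycweb01 would be routed to an index named nyc, and events from laxdb02 to lax. A quick sanity check after restarting:
| tstats count where index=* by index, host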
Hope this helps.
Hi @robertlynch2020,
The answers link that @vnravikumar posted can help you achieve what you want to do. You would have to know a little bit about JavaScript to do it. However, as per the answers response, you cannot remove "Splunk 7.2.6" without violating the Splunk license.
You might be able to add the host to the title tab (you're assuming the risk of violating Splunk's license terms) by modifying common.js to include a new variable for the host value (pulled from the URL):
Change (from $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/build/pages/enterprise/common.js):
setPageTitle:function(title){this.deferreds.serverInfo.done(function(){var version=this.model.serverInfo.getVersion()||("N/A").t(),isLite=this.model.serverInfo.isLite();document.title=splunkUtils.sprintf(("%s | Splunk %s %s").t(),title,isLite?"Light":"",version)}.bind(this))}
To:
setPageTitle:function(title){this.deferreds.serverInfo.done(function(){var version=this.model.serverInfo.getVersion()||("N/A").t(),isLite=this.model.serverInfo.isLite();var hostvalue=location.host;document.title=splunkUtils.sprintf(("%s | %s | Splunk %s %s").t(),title,hostvalue,isLite?"Light":"",version)}.bind(this))}
You would need to restart Splunk after this, and also clear your browser cache (or, better, test in an incognito window).
After this your tab would look like:
TPS_VIEW | mxwiki.murex.com | Splunk 7.2.6
Hi @brdr,
This message is usually an indication of improperly configured storage for indexing operations. If you're running into a situation where your indexers have less than 5GB (the default threshold for this message) of free disk space for the hot/cold storage volumes, you probably have one of the following situations:
1. You have not properly configured indexes.conf settings for volume management that allows Splunk to clean up space as needed for the hot/cold volumes.
2. You have not provided the supported minimum disk space of 300GB for /opt/splunk (or wherever Splunk is installed), and search operations are filling that space, causing this message.
From your description, it sounds like #2 in this case. If it were me, I would investigate the server(s) in question to determine which directory is causing the issue (most likely something in /opt/splunk/var). From the CLI (Linux) I would use the following command:
du -sh /opt/splunk/*
This will show how much storage each directory under /opt/splunk is using. If it's var, then drill into var as well:
du -sh /opt/splunk/var/*
Depending upon which directory below that is causing the issue, there are different steps to take, but you would have an idea of where the offending data resides.
If you have NOT allocated the minimum of 300GB for /opt/splunk, I would highly recommend that you do so. If the hot/cold data shares a mount point with /opt/splunk, then I would recommend reviewing your indexes.conf to implement volume/index management that rolls data appropriately and does not allow the disk to fill up. Typically, I recommend configuring things to leave 5% - 10% of the volume free.
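For illustration, volume-based management in indexes.conf looks something like this (a minimal sketch; the paths, sizes, and index name are hypothetical and must be adjusted to your environment):
indexes.conf
[volume:hot]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 250000
[volume:cold]
path = /mnt/splunk_cold
maxVolumeDataSizeMB = 500000
[my_index]
homePath = volume:hot/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
With maxVolumeDataSizeMB set, Splunk rolls the oldest buckets off a volume before it fills, rather than letting the free-space check halt indexing.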
I hope this helps.
Hi @lmvmandadi,
The answer to your question of whether it's safe to delete is, "it depends". If you're in a SHC environment, you might be able to remove items from that folder without major impact. However, given the questions and responses in the thread you've posted, I would probably look at rebuilding that particular search head (a command-line sketch follows the steps below):
1. Stop the problem search head
2. Back up the /opt/splunk (or whatever $SPLUNK_HOME is associated to) folder
3. Move or delete the /opt/splunk folder
4. Install a clean copy of Splunk the same version as your other members
5. Add the clean install member to the SHC and let the SHC captain re-sync all the files
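Here is a rough command-line sketch of those steps (assuming Linux, a $SPLUNK_HOME of /opt/splunk, and hypothetical host names, ports, and secret; verify the shcluster flags against your deployment before running anything):
/opt/splunk/bin/splunk stop
tar -czf /backup/splunk_sh_backup.tar.gz /opt/splunk
rm -rf /opt/splunk
# install a clean copy of Splunk, same version as the other members, then:
/opt/splunk/bin/splunk init shcluster-config -mgmt_uri https://sh3.example.com:8089 -replication_port 9200 -secret <shcluster_secret>
/opt/splunk/bin/splunk restart
# from the SHC captain:
/opt/splunk/bin/splunk add shcluster-member -new_member_uri https://sh3.example.com:8089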
The downside to the above is that you will lose any changes made on that particular server (not permanently, since we backed up the configuration files, but those changes will need to be re-applied to the clean copy).
I think this is the better approach, given that you would otherwise probably spend more time troubleshooting SHC issues and potentially introduce other problems into the SHC.
Hope this helps.
Yes, it is resource intensive. As @woodcock mentions, you should avoid using it if you can, though it does have its uses in limited situations. You can try his suggestion below, using streamstats. Just be aware that streamstats is memory restricted (the default is 200MB per search), so if you have a very large set of data to search through, streamstats will stop calculating at some point.
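If you do hit that cap, the limit can be raised in limits.conf (a sketch; max_mem_usage_mb is the relevant setting, but the value here is arbitrary, and raising it also affects other commands that honor the same limit):
limits.conf
[default]
max_mem_usage_mb = 500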