All Posts

@PickleRick I am using a single standalone machine, and the data comes in through a network directory. That network directory receives files, which I then monitor into Splunk using inputs.conf.
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "PROBLEM"
| fields host
| transaction host startswith="To:"
| search "To: <Mail-Address>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=Subject "PROBLEM - (?<src_host_2>.*) - (?<Service_2>.*) is (?<State_2>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service=if(isnull(Service_1),Service_2,Service_1), src_host=if(isnull(src_host_1),src_host_2,src_host_1), State=if(isnull(State_1),State_2,State_1)
| fields host, Service, src_host, State, Subject, Additional_Info
| lookup hostdata_lookup.csv host as src_host
| table src_host, Service, State, _time, cluster, isvm
| rename _time as Start_time
| search isvm=N AND cluster=*EDGE*
| eval Start_time=strftime(Start_time, "%m/%d/%Y - %H:%M:%S")
| sort Start_time

For security reasons, the mail address has been removed.
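For reference, a minimal inputs.conf monitor stanza for such a network directory might look like the sketch below; the path is a placeholder, while the index and sourcetype are taken from the search above.

# Monitor every file that appears in the network directory (path is hypothetical)
[monitor:///mnt/network_share/icinga]
index = imdc_nagios_hadoop
sourcetype = icinga
disabled = false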
Please share some anonymised representative events so we can better understand what you are dealing with. Please use a code block </> so that they can be used to simulate your situation.
@PickleRick could you share some documentation on this? Or is it enough to add this in the sourcetype configuration in the inputs configuration file, if I'm not mistaken?
The storage will be on a Dell EMC array, and considering the Splunk recommendations and the SAN characteristics, it should work smoothly on paper. I will test with a few indexes and check. Thanks a lot for your help!
Hi guys, please forgive my English first; it is not my native language. I have a distributed search deployment consisting of an indexer instance and a search head instance. Their host specifications are as follows:

indexer: CPU: E5-2682 v4 @ 2.50GHz / 16 cores, Memory: 32GB, Disk: 1.8TB (5000 IOPS)
search head: CPU: E5-2680 v3 @ 2.50GHz / 16 cores, Memory: 32GB, Disk: 200GB (3400 IOPS)

I have 170GB of raw logs ingested into the Splunk indexer every day across 5 indexes, one of which is 1.3TB in size. Its name is tomcat and it stores the logs of the backend application; that index is now full. When I search for events in this index, the search is very slow. My search is:

index=tomcat uri="/xxx/xxx/xxx/xxx/xxx" "xxxx"

I'm very sorry that I use xxx to represent certain words, because they involve privacy issues of the API interface. When searching for events from 7 days ago, no results were returned for a long time. I even tried searching the logs for a specific day, but the search speed was still not ideal; after waiting about 5 minutes, I gradually saw some events appear on the page. In the job inspector I found that command.search.index, dispatch.finalizeRemoteTimeline, and dispatch.fetch.rcp.phase_0 have high execution costs, but that doesn't help me much. I tried bypassing the search head and running the search on the indexer's web UI, but it was still slow; does this mean there is no bottleneck on the search head? During the search I observed the host monitoring metrics (screenshot attached), and the indexer server's resources did not seem to be completely exhausted. So I tried restarting the indexer's splunkd service and, unexpectedly, the search speed seemed to improve: with the same search query and time range, events were gradually returned, although not particularly fast. Just as I was celebrating having solved the problem, my colleague told me the next day that the search speed was unsatisfactory again, although results were still returned gradually during the search. So restarting is not a real solution; it only brings temporary relief. How do you think I should solve the problem of slow search speed? Should I scale out horizontally and create an indexer cluster?
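One way to narrow this down (a sketch, assuming the default _introspection index is enabled on the indexer; field names may vary slightly by Splunk version) is to chart CPU pressure on the indexer while a slow search is running:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_indexer>
| timechart avg(data.cpu_system_pct) avg(data.cpu_user_pct) avg(data.normalized_load_avg_1min)

If the CPU stays mostly idle while command.search.index dominates the job cost, the bottleneck is usually disk I/O on the indexer rather than the search head.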
I need to compare, per host, the Start_time (Icinga problem) with the End_time (Icinga recovery): if the alert has been recovered within the SLA (i.e., 15 minutes), take action; otherwise do nothing. Any help is appreciated.
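A sketch of one possible approach, assuming the problem and recovery events can be matched on the host field (the base search is borrowed from the earlier post, and 900 seconds corresponds to the 15-minute SLA):

index=imdc_nagios_hadoop sourcetype=icinga ("PROBLEM" OR "RECOVERY")
| eval phase=if(searchmatch("RECOVERY"), "recovery", "problem")
| stats min(eval(if(phase="problem", _time, null()))) as Start_time max(eval(if(phase="recovery", _time, null()))) as End_time by host
| eval recovered_within_sla=if(End_time - Start_time <= 900, "yes", "no")

Hosts with recovered_within_sla="yes" would then be the ones to act on.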
@richgalloway I am trying to extract them using regex. I select the event, choose Action, then Extract Fields, and select regular expression as the extraction method.
You mean - for example - having //1.2.3.4/idx1 and //1.2.3.4/idx2 mounted to /srv/splunk_cold on idx1 and idx2 respectively? Yes, that will work. Of course, the performance of searching over NFS will not be stellar and you might regret not using local storage, but from a technical point of view it will work.
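As an illustration (the index name is a placeholder), each indexer's indexes.conf would then point the cold buckets at its local mount of its own share:

# On idx1, where //1.2.3.4/idx1 is mounted at /srv/splunk_cold
[main_index]
homePath = $SPLUNK_DB/main_index/db
coldPath = /srv/splunk_cold/main_index/colddb
thawedPath = $SPLUNK_DB/main_index/thaweddb

Since the mount point name is the same on both indexers, the same indexes.conf can be deployed to both while each machine still writes to its own NFS volume.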
Your question is not clear. If you want to make your ingested data CIM-compliant, you should do as @marnall says - create tags, and make sure your fields are either CIM-conformant or create calculated fields and aliases to make them so. But since you're speaking about dashboards - if you want to use datamodels, just do that: search or run tstats over the datamodels, not the raw data, and use those searches to power your dashboard panels.
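For example, a dashboard panel could run a sketch like this against the CIM Authentication datamodel (assuming that datamodel is accelerated and your data is mapped to it; substitute whichever datamodel actually fits your events):

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.user

tstats over an accelerated datamodel is typically much faster than searching raw events, which matters for dashboards that many users load.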
OK, I understand. What about having two different NFS volumes on a SAN, one volume for each indexer, where the mount point on the OS has the same name on both indexers? Can this solution work?
1. Haven't we discussed this on Slack yesterday? (Or was I discussing it with another person? The sourcetype was the same and the case was similar.) 2. Your LINE_BREAKER should already get rid of the "event": part (it's within the capture group, so it should be treated as part of the line breaker and stripped). So apparently your settings are not being applied at all. I'd say you probably have your props set on the wrong component.
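For reference, a props.conf sketch along those lines (the sourcetype name and the exact breaker regex are assumptions, since the original settings are not quoted here):

[my_sourcetype]
# Everything inside the capture group is consumed as the line breaker and stripped
LINE_BREAKER = ([\r\n]+"event":\s*)
SHOULD_LINEMERGE = false

And it must live on the first parsing component in the pipeline (indexer or heavy forwarder), not on a universal forwarder.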
@yuanliu Yes, I had to convert them to XML, so that I could extract the fields I needed. The logs are in French, and I was having issues parsing them
I have two searches: one produces Icinga problem alerts and the other produces Icinga recovery alerts. I want to compare the host and State fields: if the Icinga alert has been recovered within a 15-minute window, no action should be taken; otherwise, a script should be executed. Snippets of the first and second searches were attached as screenshots.
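One way to combine the two searches (a sketch with placeholder base searches, since the snippets are not reproduced here; note that the appended subsearch is subject to subsearch result limits):

index=your_index sourcetype=icinga "PROBLEM" | eval type="problem"
| append [ search index=your_index sourcetype=icinga "RECOVERY" | eval type="recovery" ]
| stats min(eval(if(type="problem", _time, null()))) as Start_time max(eval(if(type="recovery", _time, null()))) as End_time by host
| eval execute_script=if(End_time - Start_time > 900, "yes", "no")

Rows with execute_script="yes" are the alerts that were not recovered within the 15-minute window.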
eventtype=msad-rep-errors (host="*")
| lookup EventCodes EventCode, LogName OUTPUTNEW desc
| eval desc=if(isnull(desc), "Unknown EventCode", desc)
| stats count by host, Type, EventCode, LogName, desc
| lookup DCs_tier0.csv host OUTPUTNEW domain offset_value
| search offset_value=1
| search (host="*") (domain="*")
| table host domain Type EventCode LogName desc
No. Regardless of whether it's Splunk or any other solution that assumes it has full control over its data (in this case, the contents of the colddb directory), configuring multiple instances of "something" over the same set of data is a pretty sure way to end in disaster. BTW, SmartStore works differently than normal storage tiering. Since it uses object storage and you can't just access files randomly, it relies on a cache manager to bring whole buckets into the cache when they're needed. It works well for some use cases, but for others (frequent searching across many historical buckets that don't fit on warm storage in total) it can cause performance headaches.
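For context, SmartStore is enabled per index by pointing it at a remote volume, roughly like this sketch (the bucket URL and index name are placeholders, and a real deployment also needs authentication settings):

[volume:remote_store]
storageType = remote
path = s3://my-bucket/smartstore

[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

With remotePath set, local storage effectively becomes the cache the cache manager evicts from, which is why warm/cache sizing matters so much for search performance.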
Wait a second. You're trying to say that regardless of what timezone you set in your preferences, the event is still shown at the same time for the same event? (The time on the left, not the time within the event itself, obviously, since that one is already ingested and indexed and won't change.) That should be impossible. BTW, what does your ingestion architecture look like for this source? File -> UF -> indexer? Where do you have your props.conf settings (on which component)?
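If it turns out the timestamps are parsed with the wrong timezone at ingestion (rather than being a display problem), the usual fix is a TZ setting in props.conf on that first parsing component; the sourcetype name and timezone below are placeholders:

[my_sourcetype]
# Interpret timestamps that carry no timezone offset as this zone
TZ = Europe/Paris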
You can do inline extraction with rex, e.g. | rex "lda\((?<to>[^\)]*)\)" which will extract a new field called to from the portion between the brackets. You can also set this up as a field extraction - see Fields -> Field Extractions and create a new field extraction there using the regex above; then, if lda(xxx) exists in your data, you will get a field called to.
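If you prefer configuration files over the UI, the same search-time extraction can be expressed in props.conf (the sourcetype name here is an assumption):

[my_sourcetype]
# Creates the field "to" from the text inside lda(...)
EXTRACT-to = lda\((?<to>[^\)]*)\)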
I am also facing the same issue. Have you found a solution for this?
Hi @atr,
check also the other hardware requirements, to avoid future issues.
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors