All Posts


Hi all, I still fail to decrypt the ePO logs. This is my config:

[tcp://6514]
connection_host = ip
host = DCHQ-SIMSL-01
source = 10.220.34.23:6514
sourcetype = mcafee:epo:syslog
index = mcafee

[SSL]
serverCert = /splunk/cert/splunk-epo-remote.pem
requireClientCert = false
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

Any ideas?
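One thing to check: a plain [tcp://...] stanza receives the raw bytes as-is, so if ePO is sending the syslog feed over TLS, the data will land undecrypted. In that case the receiving stanza usually needs to be [tcp-ssl://...], with the [SSL] settings in the same inputs.conf. A minimal sketch, assuming the feed really is TLS and the PEM contains both the server certificate and its private key (port and paths taken from your config; sslPassword only if the key is encrypted):

[tcp-ssl://6514]
connection_host = ip
host = DCHQ-SIMSL-01
sourcetype = mcafee:epo:syslog
index = mcafee

[SSL]
serverCert = /splunk/cert/splunk-epo-remote.pem
sslPassword = <key passphrase, if any>
requireClientCert = false

After a restart, splunkd.log should show whether the TLS handshake on 6514 succeeds or is rejected.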
The eval examples I provided yesterday are for SPL queries.  They can be modified for props.conf files, however.
I don't know why Splunk doesn't distribute clear instructions or tools to install and configure the forwarder properly on Linux. RHEL 9.x does not have init.d, so you need to set boot-start with -systemd-managed 1, and even then the installed service also needs systemctl enable SplunkForwarder.service. On RHEL 8 this is not the case.

The latest forwarder, 9.1.1, also won't set up properly if you don't use user-seed.conf.

I came up with the script below, which does its job somehow; it would be nice if someone would add their ideas to make it better. (I'm running Splunk as root for testing purposes.)

#!/bin/bash
SPLUNK_FILE="splunkforwarder-9.1.1-64e843ea36b1.x86_64.rpm"
rpm -ivh "$SPLUNK_FILE"

## change ownership to root
chown -R root:root /opt/splunkforwarder

## create a user-seed.conf file so Splunk sets the admin credentials without user interaction
cat <<EOF > /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = changeme
EOF

## configure Splunk (RHEL 8)
/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0
/opt/splunkforwarder/bin/splunk start --no-prompt --answer-yes

## configure Splunk (RHEL 9.x)
#/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
#/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1
#systemctl enable SplunkForwarder.service
#systemctl start SplunkForwarder.service
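One idea to make it a bit more generic: pick the systemd-managed mode from the OS version instead of commenting sections in and out. A sketch, assuming /etc/os-release is present (it is on both RHEL 8 and RHEL 9):

## choose the boot-start mode based on the RHEL major version
RHEL_MAJOR=$(. /etc/os-release && echo "${VERSION_ID%%.*}")
if [ "$RHEL_MAJOR" -ge 9 ]; then
    /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1
    systemctl enable SplunkForwarder.service
    systemctl start SplunkForwarder.service
else
    /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0
    /opt/splunkforwarder/bin/splunk start --no-prompt --answer-yes
fi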
Hey @jwilczek, I'm afraid not. Have you tried running a later version of Splunk? I suspect you won't run into the problem there.
Hi @richgalloway, these eval group and eval user stanzas have to be in transforms.conf, right? Thanks
Start with this and look to see when you last got events for that index and which host or hosts it was:

| tstats latest(_time) as LatestEvent where index=waf_imperva by host

Then work backwards from there to figure out why you don't have any events.
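One thing to bear in mind: tstats only returns rows for data that actually exists, so an index with no events in the time range (or no events at all) simply won't appear; it can't be "included" by the search alone. A sketch of one way to surface expected-but-silent sources, assuming you maintain a lookup of what should be reporting (here a hypothetical expected_sources.csv with index and host columns):

| tstats latest(_time) AS LatestEvent where index=* by index, host
| append [| inputlookup expected_sources.csv | eval LatestEvent=0]
| stats max(LatestEvent) AS LatestEvent BY index, host
| eval status=if(LatestEvent=0, "no events found", "reporting")

Any index/host pair that only came from the lookup will show up with status="no events found".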
The query should have a result for index=waf_imperva. However, the result is not there. How do I ensure waf_imperva is included in the query, or how do I troubleshoot why it is not there?
Hi @smithy001, the capacity of the storage tiers must be calculated in a capacity plan: you have to define how long data remain in warm buckets before passing to cold. If you have little data in hot/warm and a full storage in cold status, you have to rebuild your capacity planning. Anyway, as I said, cold data are usually on less expensive storage, so you should analyze your data to define the correct point for the status change. You could, for example, keep two months instead of one month in warm status; that way you'll have better performance in searches. But in any case you have to correctly analyze and design your data flows in a capacity plan. Ciao. Giuseppe
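If it helps with that analysis, dbinspect can show how much each index currently holds per bucket state, so you can compare hot/warm against cold before deciding where to move the roll-over point. A quick sketch, assuming you are allowed to run it across all indexes:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS totalMB BY index, state
| eval totalGB = round(totalMB/1024, 2)
| fields index state totalGB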
Hi @wkk, you could try something like this:

index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| mvexpand SUBMITTED_FROM
| mvexpand STAGE
| search SUBMITTED_FROM=startPage STAGE=submit
| stats count BY SESSION_ID

Ciao. Giuseppe
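If the mvexpand approach ever runs into memory limits on large sessions, a variant that skips the expansion and returns the number of matching sessions directly might also work. A sketch, assuming the field values are exact matches:

index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| where isnotnull(mvfind(SUBMITTED_FROM, "^startPage$")) AND isnotnull(mvfind(STAGE, "^submit$"))
| stats count AS session_count

mvfind returns the position of the first multivalue entry matching the regex, or null if none matches, so the where clause keeps only sessions that contain both values.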
Hi, just to help anyone else. This builds on gmorris_splunk's answer. Version: 8.2.6. The snippet below only shows the Date Range picker. Note the removal of the commas and the use of empty curly brackets. One thing I could not get to work was displaying only the 'Between' option.

<panel>
  <html>
    <style>
      body, .dashboard-body, .footer, .dashboard-panel, .nav {
        background: #F8FCF7;
      }
      div[data-test^='time-range-dialog'] {
        background-color: #EDF8EB;
        min-width: 300px !important;
        width: 400px !important;
      }
      div[data-test^='body'] {
        background-color: #D1ECCC;
      }
      div[data-test-panel-id^='date'] {}
      div[data-test-panel-id^='presets'] { display: none !important; }
      div[data-test-panel-id^='dateTime'] { display: none !important; }
      div[data-test-panel-id^='advanced'] { display: none !important; }
      div[data-test-panel-id^='realTime'] { display: none !important; }
      div[data-test-panel-id^='relative'] { display: none !important; }
    </style>
  </html>
</panel>
Hi!

I have the following table:

SESSION_ID | SUBMITTED_FROM | STAGE
1          |                | submit
1          | startPage      | someStage1
2          |                | submit
2          | page1          | someStage1
2          | page2          | someStage2

How could I count the number of SESSION_IDs that have SUBMITTED_FROM=startPage and STAGE=submit? Looking at the above table, the outcome of that logic should be 2 SESSION_IDs.
We had a similar finding from Splunk, with high I/O wait time on the Search Heads. I have used the following search to monitor it:

index=_introspection sourcetype=splunk_resource_usage component=IOStats
| eval avg_wait_ms = 'data.avg_total_ms'
| search data.mount_point="/apps/splunk"
| eval sla=10
| timechart limit=30 minspan=60s partial=f avg(data.avg_total_ms) as avg_wait_ms max(sla) AS sla by host

Use a trellis-format timechart (split by host) to display it. The sla=10 field is there to show the 10 ms limit Splunk recommends. I haven't been able to work out why we have high I/O on the Search Heads, though; the indexer cluster seems to perform OK. The Search Head Captain has notably higher I/O wait compared to the others. There have also been issues with KV Store, so I am wondering if that is related. Note: I/O wait time is not a configuration that can be set; it is the result of the operations being carried out on the disk.
Hi, The filename is called "lookup_edit" and you can navigate to it using the UI: Settings - User Interface - Views.
It is not clear what your expected result would look like - please can you explain further
Hi All,

I have this query that runs:

| tstats latest(_time) as LatestEvent where index=* by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent
| search NOT (index="cim_modactions" OR index="risk" OR index="audit_summary" OR index="threat_activity" OR index="endpoint_summary" OR index="summary" OR index="main" OR index="notable" OR index="notable_summary" OR index="mandiant")

The result is below.

Now how do I add index=waf_imperva? Thanks.

Regards,
Roger
What would be your expected output?
Change your fieldForLabel and fieldForValue attributes:

<fieldForLabel>st_time</fieldForLabel>
<fieldForValue>st_time</fieldForValue>
Thanks for the reply... I understand the use of two separate volumes. I was asking if anyone could see a situation where the cold [spindle] volume could become full whilst the hot/warm [SSD] still had capacity, if both were sized the same... 6 months on SSD, 6 months on spindle...
The actual config has not been decided yet. I'm trying to find the best one to spread the buckets across the indexers.
Thanks much @ITWhisperer It really worked