All Posts



How do I get slurm log content into Splunk?
Either you indeed have very strange data (from more than 10 years ago?) or your sources are badly onboarded and the date is being wrongly parsed.
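If it's the latter, the usual fix is an explicit timestamp configuration in props.conf on the first full Splunk instance that parses the data (heavy forwarder or indexer). A minimal sketch - the sourcetype name and time format below are assumptions, so adjust them to what your log lines actually look like:

# props.conf -- illustrative sketch only; sourcetype and TIME_FORMAT are assumed
[my:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

Once the timestamp is parsed correctly at index time, the warning (and the strange dates) should go away for newly indexed events; already-indexed events keep their bad timestamps.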
Easiest is to just have two alerts.  There are practically *zero* downsides to just building a new search (you can start with your existing one!) and then creating an alert out of it once it looks like you've got the search all sorted out.

That being said, I think you might just need to change

source = "/tmp/unresponsive" sourcetype=cmi:gems_unresponsive

to be able to do both at once.  I'm not sure what you need to change it to, though. MAYBE - if /tmp/unresponsive is the source for either server, maybe all it needs is

source = "/tmp/unresponsive" ( sourcetype=cmi:gems_unresponsive OR sourcetype=<whatever the sourcetype is for the other servers> )

And honestly, I'd go back to that core piece of the search (index=foo, source=bar, sourcetype=baz) and *find the events* first.  It should make it more obvious how to get their data in there too.
What's the actual regex you are using to capture it? And if you are sure you are using the right syntax because you copy-pasted it from a working one - you could try adding | eval state = "up" as the last command in the search to force it to be "up" and see if that works.  If it doesn't, then I'd say there's something else wrong with that syntax.
Splunk doesn't apply any inherent extraction to data as you illustrated. (By default, it extracts key-value pairs connected by an equal (=) sign, and some structured raw events such as JSON.)  If you see more fields in ASA feeds, it must be the doing of the Splunk Add-on for Cisco ASA.  You need to open up that add-on and see what it is doing.  Then you can copy it, if licensing permits.  Or you can develop your own extraction strategy to emulate what the Splunk Add-on for Cisco ASA does, or more.

Given that the two data sources are so close in format, there is also a possibility that the Splunk Add-on for Cisco ASA has some configuration you can tweak to include the FTD data type.  Consult its documentation, or contact the developers.

This board used to have an app forum that I no longer see.  Maybe it is now Splunk Dev?  You can check whether the Splunk Add-on for Cisco ASA developers are in that forum.
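If you do end up rolling your own extraction, a search-time EXTRACT in props.conf can emulate field extraction for messages like the Teardown events. A rough sketch - the sourcetype and field names below are assumptions for illustration, not the add-on's actual configuration:

# props.conf on the search head -- illustrative only; stanza and field names assumed
[cisco:ftd]
EXTRACT-ftd_teardown = Teardown TCP connection (?<connection_id>\d+) for (?<src_interface>[^:]+):(?<src_ip>[\d.]+)/(?<src_port>\d+) to (?<dest_interface>[^:]+):(?<dest_ip>[\d.]+)/(?<dest_port>\d+) duration (?<duration>\S+) bytes (?<bytes>\d+)

You would need one such extraction per message ID you care about, which is exactly why inspecting what the ASA add-on already does is the better starting point.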
I don't think that's the right app to read those events.  In any case, the app you have installed had its latest release in 2018 and references no Splunk version higher than 7.1, so it looks abandoned.

Instead, it looks like the "Cisco Secure eStreamer Client Add-On for Splunk" (https://splunkbase.splunk.com/app/3662) might extract fields from records with FTD in them.  It seems to focus on events 430001, 430002, 430003 and 430005.  Still, it's worth a shot.

Indeed, right now you could check whether you have those - try a search like

index=<your cisco index> FTD (430001 OR 430002 OR 430003 OR 430005)

If that returns a few items (or lots), then the app I mentioned above should turn them into useful fields. If that search does NOT return any events, ... well, widen the time frame.  These seem like they might be less common events, not run-of-the-mill "every TCP session makes 42 zillion of them", so it's possible there are only a few per day or something.

In any case, happy splunking and I hope you find what you need! -Rich
Hello, I am getting this warning, and the sample data looks like this:

02-22-2012 17:01:12.280 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:01:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
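If that 2012 timestamp really is correct (for example, you are indexing old Jira logs), you can widen the acceptable window in props.conf on the parsing tier, as the warning suggests. A sketch, assuming the sourcetype is atlassian_jira as shown in the warning's Context; the values are placeholders you should size to your oldest data:

# props.conf -- illustrative; widen only as far as you actually need
[atlassian_jira]
MAX_DAYS_AGO = 4500
MAX_DAYS_HENCE = 2

If the timestamp is NOT correct, fix TIME_PREFIX / TIME_FORMAT for that sourcetype instead, so the right timestamp gets picked up.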
Also, if the NAME field is available in raw events at search time, you should include the subsearch in the index search to improve performance.  Like

index=foo NAME IN ( [| makeresults | eval search="task1,task2,task3"] )

If NAME is populated by some calculation in SPL, you need @richgalloway's full solution.  But I believe you will need the format command for the meta keyword search to work.
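For the record, a format-based variant might look like the sketch below - the task names are placeholders. The subsearch emits an explicit ( NAME="task1" ) OR ( NAME="task2" ) OR ( NAME="task3" ) group that the outer search can apply as indexed keywords:

index=foo
    [| makeresults
     | eval NAME=split("task1,task2,task3", ",")
     | mvexpand NAME
     | fields NAME
     | format]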
I think @thambisetty meant to use sum instead of values.  Also, you seem to want results bucketed by hour, aligned to the beginning of a calendar hour.  If so, you also need a bin command.

```Below is the SPL you potentially need```
| spath inboundErrorSummary{}
| mvexpand inboundErrorSummary{}
| spath input=inboundErrorSummary{}
| bin _time span=1h@h
| chart sum(value) over _time by name

Now, I recently learned of the fromjson command introduced in Splunk 9.  It makes the SPL somewhat easier to read.

| fromjson _raw
| mvexpand inboundErrorSummary
| spath input=inboundErrorSummary
| timechart span=1h@h sum(value) by name

Here, timechart is equivalent to @thambisetty's chart, but you do not have to enter a separate bin command.
"111222 host when the instance is R:" is ambiguous.  You should include an illustration of the desired results in a question.

1. The most literal interpretation is to remove only the row with host 111222 AND Instance R:.  In other words, you want

Instance host
A:       111222
C:       111222
A:       333444
C:       333444
R:       333444

For this, you can do

| where NOT ( host == "111222" AND Instance == "R:")

BTW I don't think you should rename host to Host until everything is done.

2. But your context makes me suspect that you actually mean to remove host 111222 IF Instance R: runs on it, no matter what other instances are there.  In other words, you want

Instance R_or_not_R host
A:       A: C: R:  333444
C:       A: C: R:  333444
R:       A: C: R:  333444

For this, you need

| eventstats values(Instance) as R_or_not_R by host
| where host != "111222" OR R_or_not_R != "R:"

Which one is it? Here is an emulation:

| makeresults format=csv data="host, Instance
111222, A:
111222, C:
111222, R:
333444, A:
333444, C:
333444, R:"
``` data emulation above ```
I suppose you're talking about this. https://www.stigviewer.com/stig/red_hat_enterprise_linux_7/2017-12-14/finding/V-71849 See the description of the finding.
I am working on a query that lists hosts and their corresponding instances. My results look like the example below.  I want to remove the 111222 host only when the instance is R:. I am not certain how to do this within my query.

Host   Instance
111222 A:
111222 C:
111222 R:
333444 A:
333444 C:
333444 R:
Hi @scelikok    Thank you for your reply.  We will test this option and see if it works. Best regards.
I created some reports to populate a dashboard. When two or three reports populate the dashboard, everything goes fine, but when there are many, the reports don't load in the dashboard panels. I used it like this:

{
  "type": "ds.savedSearch",
  "options": {
    "ref": "test_consumo_licenca_index_ultimos_30_dias_top_5"
  }
}

To recap: the dashboard loads when there are few reports, but with many it doesn't - it stays loading forever. The report itself has already been executed; the panel just needs to load the information.
This XML allowed me to freeze the first column and the column title:

<row depends="$never$">
  <panel>
    <html>
      <style>
        #myTable div [data-view="views/shared/results_table/ResultsTableMaster"] td:nth-child(1) {
          left: 0 !important;
          z-index: 9 !important;
          position: sticky !important;
        }
        #myTable div [data-view="views/shared/results_table/ResultsTableMaster"] th:nth-child(1) {
          left: 0 !important;
          z-index: 9 !important;
          position: sticky !important;
        }
      </style>
    </html>
  </panel>
</row>

Then set myTable as the id on the <table> element.
Using splunkforwarder-9.0.2-17e00c557dc1.x86_64 on the forwarder Linux box. Using splunk-9.0.4-de405f4a7979.x86_64 on the indexer node. From the forwarder node I am able to telnet to the indexer node fine. In splunkd.log on the forwarder node, I see the below errors for s2s negotiation failed:

ERROR AutoLoadBalancedConnectionStrategy [25021 TcpOutEloop] - s2s negotiation failed. response='NULL'
ERROR TcpOutputFd [25021 TcpOutEloop] - s2s negotiation failed. response='NULL'
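One common cause of response='NULL' during s2s negotiation is an SSL mismatch between the two sides - this is a guess, not something the logs above confirm. The sketch below shows the pairing to check; the host and port are placeholders:

# forwarder: outputs.conf -- if useSSL is true here...
[tcpout:primary_indexers]
server = indexer.example.com:9997
useSSL = true

# indexer: inputs.conf -- ...the receiving port must be splunktcp-ssl (and vice versa:
# a plain splunktcp input needs useSSL = false on the forwarder)
[splunktcp-ssl:9997]
disabled = 0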
Thanks for your reply. I'm not 100% sure my assumptions are correct, but the config is pretty simple. I'm the admin and I did give the splunk virtual account full permissions. I am searching all indexes for a specific host. The Splunk logs do have errors, but the only lines associated with the file in question say the file is parsed and watched. The UF is collecting event logs and UF logs. What are some other factors I can look at?
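A couple of things worth checking on the UF itself - these are standard commands, but what they show will of course depend on your setup:

# Ask the tailing processor what it thinks of the file
# (look for the file's read position and any ignored/unreadable status)
$SPLUNK_HOME/bin/splunk list inputstatus

# Show the effective monitor stanzas after all config layering
$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

If the file shows as fully read, the data may be going to a different index or sourcetype than the one you are searching.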
Which wrong file did you delete? Which .conf file did you have trouble with? The solution is unclear, TBH.
We have both Cisco ASA and FTD firewalls.  The ASA is parsing fine and the appropriate fields are extracted.  As for the FTD logs, the data doesn't get the same treatment.  I downloaded the Cisco Firepower Threat Defense FTD sourcetype app and installed it on the search heads, because I only had the Splunk Add-on for Cisco ASA.  That didn't change anything.

Mar 1 18:44:20 USxx-xx-FW01 : %ASA-6-302014: Teardown TCP connection 3111698504 for wan:208.87.237.180/8082 to cm-data:12.11.60.44/60113 duration 0:00:00 bytes 327 TCP FINs from wan

Mar 1 13:45:09 MXxx-EG-FTD01 : %FTD-6-302014: Teardown TCP connection 125127915 for CTL_Internet:194.26.135.230/41903 to CTL_Internet:123.243.123.218/33445 duration 0:00:30 bytes 0 Failover primary closed

As you can see, the messages are nearly identical for both firewalls, but the ASA events have lots of fields of interest where the FTD events only have a few.
In some cases, I have Splunk installed on server builds where /opt/splunk is a mount point on a separate disk/LVM.  That is, the /opt/splunk dir is on a different disk than the OS. Would anyone know if there are any problems with detaching the LVM from a RHEL 7 box and then attaching it to a RHEL 9 box? Thank you