All Posts

There is one old post about this: https://community.splunk.com/t5/Alerting/How-to-detect-when-a-host-stop-sending-data-to-Splunk/m-p/563571
Hi All, I want a query that checks and fires an alert when there are no logs from a server for the past 30 minutes. For example, we have different instances running on a host, and I want an alert when no logs have come from the server in the past 30 minutes (because the server instances are not running). So if we don't see any logs from the server for the past 30 minutes, the alert should notify us that the server instances are stopped. Please help. Sample log event below. 3/1/24 12:26:07.000 PM   www 89589 0 0.0 00:00:02 0.1 51784 2151496 ? S 35:31 httpd -d_/sys_apps_01/apache/server20Cent/versions/server2.4.56_-f_/sys_apps_01/apache/server20Cent/conf/MTF.AEM.conf host = www2stl52 source = ps sourcetype = ps
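One common pattern for this (a sketch only; the index name below is a placeholder you would replace with your own) is a scheduled search over index-time metadata that flags hosts whose newest event is older than 30 minutes:

```
| tstats latest(_time) as lastSeen where index=your_index host=www2stl52 by host
| eval minutesSinceLast=round((now()-lastSeen)/60)
| where minutesSinceLast>30
```

Schedule it to run every 30 minutes and alert when the result count is greater than zero. Note the caveat from the linked post: a host that has never sent data to that index will not appear at all, so a complete solution usually joins against a lookup of expected hosts.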
Hi Rajesh, Thank you for the reply to the above doubt. I would also like to know what the maximum number of volumes a machine agent can observe is. I.e., if a server has, for example, 300 volumes/storage disks, will the machine agent be able to monitor them all, or does the limit end at around 50-60 volumes? Regards, Shashwat
Is Splunk forwarder agent 9.2.0.1 supported on Amazon Linux 2023 x86/arm OS using the RPM file? I got this error (repeated several times) while starting the splunk service: tcp_conn_open_afux ossocket_connect failed with No such file or directory
Hi, Why is my CIDR matching not following the lookup content? The query I used is as below:

| makeresults | eval ip="10.10.10.10" | lookup testip ip OUTPUTNEW description

The result should look like this:

ip            description
10.10.10.10   New

But the real output looks like this:

ip            description
10.10.10.10   New
              In Progress
              Closed

I have checked my lookup and it clearly states that the description for IP range 10.10.10.10/27 is "New". Please help and thanks!
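CIDR matching only applies when it is declared in the lookup's transforms.conf definition, and when several CIDR ranges overlap an input IP, each matching row contributes a value unless the match count is capped. A sketch of the stanza (the filename is assumed from the lookup name):

```
[testip]
filename = testip.csv
match_type = CIDR(ip)
max_matches = 1
```

With max_matches = 1, only the first matching row in file order is returned, which avoids the multivalue "New In Progress Closed" result when 10.10.10.10 falls inside several of the ranges in the file.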
This helped; there were a few more errors in the Python, which I fixed too. Thanks!
Hi @asabatini, I think the problem is the name-capturing groups in REGEX. Using name-capturing groups will already create a field without a FORMAT parameter. You can try one of the options below.

Using name-capturing groups in REGEX:

[group1]
REGEX = (?<group1>.+\s\-\s\-\s\-\s).*.auditID.:.(?<group2>[\w-]+)..*requestURI.:.(?<group4>[^,]+).+username.:.(?<group5>[^,]+).+sourceIPs....(?<group3>\d+.\d+.\d+.\d+)

Without name-capturing groups in REGEX:

[group1]
REGEX = (.+\s\-\s\-\s\-\s).*.auditID.:.([\w-]+)..*requestURI.:.([^,]+).+username.:.([^,]+).+sourceIPs....(\d+.\d+.\d+.\d+)
FORMAT = group1::$1, group2::$2, group5::$3, group3::$4, group4::$5
Thanks. Noted sir.
Glad you could work around this issue! Anyway, I always run a spare instance on my laptop for simple matters that can be emulated. You may consider the same. (You can also observe how versions may affect these.) I wish I had learned how to set up KV store on my laptop so I could help more. But nothing about KV store vs. CSV files suggests that a time-based lookup should function differently between them. So you would have a support case, except 7 might be out of support.
That makes sense. I wanted to create some additional fields for the output and was getting hung up on the usage of | stats, and had to switch it to | eventstats to retain _raw data for the rest of the code after the stats/eventstats. You have helped me before, PickleRick, and always provide good info! Works like a charm, thanks again!

| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| eval aStatus=case(
    searchmatch("START of script"), "Start",
    searchmatch("COMPLETED OK"), "End",
    searchmatch("ABORTED, exiting with status"), "End",
    true(), null()
)
| eventstats values(aStatus) as aStateList by aJobName
| where aStateList != "End"
|........
It would help to have a sample (sanitized) event to work with. Avoid lookbehind and lookahead in Splunk. They're costly and rarely necessary. Try: on\s(?<HostName>\S*)\sby Firewall Settings
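For example, applied at search time with rex (a sketch; the field name HostName is just illustrative, and \S* assumes the host name contains no spaces):

```
| rex field=_raw "on\s(?<HostName>\S*)\sby Firewall Settings"
| table HostName
```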
It depends on what's wrong with it.  Tell us more or contact Splunk Support.
I am getting an error when using the following regex: (?<=on\s)(.*)(?=\sby Firewall Settings) The error is "Error in 'rex' command: regex="(?<=on\s)(.*)(?<HostName>.*)(?=\sby Firewall Settings)" has exceeded configured match_limit, consider raising the value in limits.conf." Is there a better way to do this? I am trying to find all text between "on " and " by Firewall Settings". It works in regex101.com, but I get that error in Splunk. TIA!
One way is using SEDCMD. Add this to the appropriate props.conf file:

[mysourcetype]
SEDCMD-rmJSONprefix = s/^[^{]+//
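You can preview the effect of the sed expression at search time before touching props.conf. This sketch fabricates a single shortened event with makeresults:

```
| makeresults
| eval _raw="<165>Feb 29 19:06:30 server01 darktrace {\"hostname\":\"ss-26138-03\"}"
| rex mode=sed "s/^[^{]+//"
```

The result should be just the JSON object, with the syslog prefix stripped. Note that SEDCMD applies at index time, so it only affects data ingested after the props.conf change.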
Hello all, I'm bringing data into Splunk as JSON, but it comes with extra text in front that throws off the JSON parsing. Any suggestion on a regex to remove that leading text? <165>Feb 29 19:06:30 server01 darktrace {"hostname":"ss-26138-03","label":"","ip_address":"10.21.32.88","child_id":null,"name":"age_alert-inaccessible_ui","priority":61,"priority_level":"high","alert_name":"Datatrace / Email: Inaccessible UI","status":"Resolved","message":"The UI is inaccessible, this could be the result of a misconfiguration or network error.","last_updated":1709233590.814423,"last_updated_status":1709233590.814423,"acknowledge_time":null,"acknowledge_timeout":null,"uuid":"1111114d-6e72-4029-8ac2-5d051be02ad5","url":"https://server01/sysstatus?alert=1481514d-6e72-4029-8ac2-5d051be02ad5","creationTime":1709233590814}
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf#KEYS: SOURCE_KEY = MetaData:Source BTW, you don't need fields.conf on the HF. You need it on SH.
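As a sketch of how SOURCE_KEY is used in a transforms.conf stanza (the stanza name, regex, and output field here are made up for illustration, not taken from the thread):

```
[extract_app_from_source]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/(\w+)/
FORMAT = app::$1
WRITE_META = true
```

WRITE_META = true makes the extracted field an indexed field, which is why the search head needs a matching fields.conf entry while the heavy forwarder does not.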
There is no possibility that you have two identically named files in one directory. Maybe one has a typo in its name, or maybe the case of the letters in the name is wrong. We don't know, but check again. (Hint: a wrongly named file won't be used, and values in it won't get encrypted on first read.)
Yep. You're overthinking it a bit. Either you have a field containing the job state (Starting/Completed) or you can create one with

| eval state=case(searchmatch("Starting"),"Starting",searchmatch("Completed"),"Completed",1=1,null())

Then you need to check the state for each separate job:

| stats values(state) as states by whatever_id_you_have_for_each_job

(If you want to retain the job name, which I assume is a more general classifier than a single job identifier, add values(aJobName) to that stats command.) Then you can filter to see only non-finished jobs with

| where NOT states="Completed"

Keep in mind that matching multivalued fields can be a bit unintuitive at first.
You can try to align the _time field with bin command and then match events by exactly the same value of that field (you can leave the original value for reference of course). Or you can use the transaction command (generally, transaction should be avoided since it's relatively resource intensive and has its limitations but sometimes it's the only reasonable solution).
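A minimal sketch of the bin approach (the span and the fields being collected are placeholders to adapt to your data):

```
| eval orig_time=_time
| bin _time span=1m
| stats values(sourcetype) as types, values(orig_time) as actual_times by _time
```

Events whose timestamps fall into the same one-minute bucket end up with identical _time values and are grouped together, while orig_time preserves the exact original timestamps for reference.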
@kiran_panchavat Please stop spreading misinformation (especially misinformation created by generative language models). The summaryindex command is an alias for the collect command. There is absolutely no difference in behaviour between those two commands, since they are the same command, which can be called by either name. This is just my speculation, but I suspect the command was originally called summaryindex because it was meant to collect data for summary indexing and was later "generalized" to the "collect" name, which is the current command name in the docs; the "summaryindex" command name was retained for backward-compatibility reasons.
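You can see the aliasing for yourself. Both of the following write the same summarized events to the same index (the index name is a placeholder; the target index must already exist and you need write permission to it):

```
index=_internal sourcetype=splunkd earliest=-15m
| stats count by log_level
| collect index=my_summary
```

Swapping `collect` for `summaryindex` in the last line produces identical results, because both names invoke the same underlying command.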