All Posts


Hi Maximiliano, what issue are you having with basic auth? We are still able to use it without any issues. Either way, best practice is to create an API key and use that to do what you need to do; that is the suggested and recommended approach. If you are not sure how to use the API key method, just DM me and I can share a sample Python script to get you started. Ciao
Hello everyone, I am looking for a way to assign values to variables in order to avoid repetition in my query. I want to search different resources using the same variables in the same query. I have tried the following, but it does not seem to work:

| makeresults
| eval var_1="var_1_content"
| eval var_2="var_2_content"
| search (sourcetype=var_1 OR sourcetype=var_2)

Could you please help me correct this or provide an alternative approach to achieve this? Thank you for your assistance!
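(For reference, a minimal sketch of one common workaround, assuming the goal is simply to expand a fixed list of sourcetype values into the base search; the index name and sourcetype values below are placeholders. Note that in the search command, sourcetype=var_1 compares the field against the literal string "var_1", not a variable, which is why the attempt above does not match anything. A subsearch can generate the OR clause instead:

index=your_index
    [| makeresults
     | eval sourcetype=split("var_1_content,var_2_content", ",")
     | mvexpand sourcetype
     | table sourcetype]
| ...

Splunk expands the subsearch rows into ( sourcetype="var_1_content" ) OR ( sourcetype="var_2_content" ) before the outer search runs. A search macro with arguments is another common way to avoid repeating the same values.)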
OK, so I was able to figure it out, but now I have a new issue that I don't even know where to start with. I have a list of alerts that I want to move from one Splunk app to another. Is there a way to do it with a script? Doing it one by one will take me forever. I have a file with: alert name, alert ID, current app name, new app name.
Argh. Right. I wrote this offline and forgot that timechart will create separate data points in the same result row. Yes, in this case it's addtotals (or untable and then eventstats). Just for reference, the untable/eventstats alternative would look somewhat like this:

timechart span=1h count by tempo usenull=false
| untable _time tempo count
| eventstats sum(count) as total by _time
| eval percentage=count/total

But this will give you separate data points for every time/tempo pair, so you'd need to xyseries that back to table layout. So @yuanliu 's solution might indeed be more convenient here.
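(For reference, the reshaping step mentioned above might look something like this sketch, assuming you keep the percentage field from the previous eval:

| eval percentage=round(percentage*100, 1)
| xyseries _time tempo percentage

xyseries pivots the (_time, tempo, percentage) rows back into one column per tempo value, similar to the original timechart layout.)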
Hi @baiden , $SPLUNK_HOME is the folder where Splunk is installed (or is being installed); on Windows, by default, it is usually "C:\Program Files\Splunk". Ciao. Giuseppe
Hi @BlueQ , I'm not a Linux expert so I don't know the exact steps, but you have two options: configure ACLs on your servers to permit a non-root user to read root-owned files, or add the Splunk user to the appropriate system group so it can read root-owned logs. As I said, you should ask a Linux expert how best to meet this requirement. Ciao. Giuseppe
This was so helpful and fixed my problem, thank you very much!
@anissabnk  Are you looking for this?

XML:

<dashboard version="1.1" script="dropdown_via_js.js">
  <label>Dropdown Via JS</label>
  <row>
    <panel>
      <html>
        <div id="mydropdownview"></div>
      </html>
    </panel>
  </row>
</dashboard>

JS:

require([
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/dropdownview",
    "splunkjs/mvc/simplexml/ready!"
], function(SearchManager, DropdownView) {
    // Use this search to populate the dropdown with index values
    let my_search = new SearchManager({
        id: "example-search",
        search: "index=_internal |stats count by source |table source"
    });

    // Instantiate components
    let my_dropdown = new DropdownView({
        id: "example-dropdown",
        managerid: "example-search",
        labelField: "source",
        valueField: "source",
        el: $("#mydropdownview")
    }).render();
});

I hope this will help you. Thanks, KV

An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
Bit of a reverse error here: Splunk is working when it shouldn't. I followed these instructions to run Splunk as non-root - https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Installleastprivileged

systemctl stop splunk
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user blueq -group blueq
systemctl start splunk

Splunk is running as this user, and the user cannot view /var/log/messages:

[root@host1 ~]# ps -ef|grep splunk
blueq   137095      1 24 14:22 ?  00:00:00 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd
blueq   137134 137095  0 14:22 ?  00:00:00 [splunkd pid=137095] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]
root    137154   6813  0 14:22 pts/0  00:00:00 grep --color=auto splunk

[root@host1 ~]# ls -l /opt/splunkforwarder/
total 172
drwxr-xr-x. 3 blueq blueq 4096 Jun 25 22:11 bin
drwxr-xr-x. 2 blueq blueq   66 Jun 25 22:11 cmake
-r--r--r--. 1 blueq blueq   57 Mar 21 09:38 copyright.txt
...

[root@host1 ~]# su - blueq
Last login: Wed Jul 10 14:24:24 AEST 2024 on pts/0
[blueq@host1 ~]$ ls -l /var/log/messages
-rw-------. 1 root root 4898581 Jul 10 14:24 /var/log/messages
[blueq@host1 ~]$ cat /var/log/messages
cat: /var/log/messages: Permission denied

Yet I see no errors in /opt/splunkforwarder/var/log/splunk/splunkd.log and the logs are still uploaded to Splunk Cloud. Why???
You will need to carefully calibrate your results before drawing conclusions. In other words, compare apples to apples. Try this exercise: run the following two searches within the same calendar hour. First

<somesearch> earliest=-1h@h latest=-0h@h
| timechart span=1s avg(host_usage) by host useother=true
| addtotals
| table _time Total *

Then this

<somesearch> earliest=-1h@h latest=-0h@h
| timechart span=1s avg(host_usage) by host useother=true limit=5
| addtotals
| table _time Total *

What the earliest and latest do in this exercise is eliminate any bucket error. (I am curious what use case could warrant timechart with a 1s span.) The table command is there so you can easily compare Total in each row. When I test this method, Total does not change when I set limit. Here is the test set:

index=_audit earliest=-1d@d latest=-0d@d
| timechart span=4h count by action useother=true
| addtotals
| table _time Total *

_time             Total  add  delete  expired_session_token  login_attempt  quota  read_session_token  search  update  validate_token
2024-07-07 21:00   1592    0       0                     14              0      1                 787       3       0             787
2024-07-08 01:00    453    0       0                     13              1      8                 199      33       0             199
2024-07-08 05:00    212    0       0                      3              1      8                  95      10       0              95
2024-07-08 09:00   1965    0       0                      9              2      8                 964      14       4             964
2024-07-08 13:00   6508    0       0                     22              2     10                3216      34       7            3217
2024-07-08 17:00   7059    1       1                      0              0      0                3519      16       3            3519
2024-07-08 21:00   4966    0       0                      0              0      0                2478      10       0            2478

index=_audit earliest=-1d@d latest=-0d@d
| timechart span=4h count by action useother=true limit=3
| addtotals
| table _time Total *

_time             Total  OTHER  read_session_token  search  validate_token
2024-07-07 21:00   1592     15                 787       3             787
2024-07-08 01:00    453     22                 199      33             199
2024-07-08 05:00    212     12                  95      10              95
2024-07-08 09:00   1965     23                 964      14             964
2024-07-08 13:00   6508     41                3216      34            3217
2024-07-08 17:00   7059      5                3519      16            3519
2024-07-08 21:00   4966      0                2478      10            2478

As you can see, the Total columns in the two outputs are identical.

Suggestion: If your event density is extremely high (given that you are using a 1s time bucket), you can use a snap-to anchor ("@", see Specify a snap to time unit) to avoid nondeterministic time buckets:

<somesearch>
| timechart span=1s@s avg(host_usage) by host useother=true
| addtotals

<somesearch>
| timechart span=1s@s avg(host_usage) by host useother=true limit=5
| addtotals
eventstats is not the answer; addtotals is.

| eval tempo=case(
    'netPerf.netOriginLatency'<2000, "Under 2s",
    'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s",
    'netPerf.netOriginLatency'>3000, "Above 3s")
| timechart span=1h count by tempo usenull=false
| addtotals fieldname=_total
| foreach * [eval <<FIELD>> = if(isnull(_total), null(), round('<<FIELD>>'/_total * 100, 1) . " %")]
You are correct; in my first comment, I even emulated the illustrated event in the OP that contains a key named "message". In all cases, this key should have been extracted into the Splunk field "message" without any processing, and @gauravkumar85's search should succeed (without that spurious spath). @gauravkumar85, do you get the "message" field extracted in these two events? It would be much better if you simply copied the exact raw events (use a code box, no emojis) that you think are causing the problem.
Great, thanks. Could you tell me what you did there to get that?
Hi @Srini_551 .. yes, Excel has some pretty cool features, but reproducing them in Splunk dashboards would take a lot of Splunk dashboarding skill and a lot of time programming it. In the end, we may ask ourselves, "is it really worth that much effort?"
Hi @smineo .. Rich's rex is working perfectly..

| makeresults
| eval log="/opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| 8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| IdP| success| yadayadaAdapter| | 285"
| rex field=log "\| \|(?<value>[^\|]+)"
| table log value
Hello - I wanted to ask if anyone happens to know the best approach (recommended by Cisco) for monitoring an AWS RDS SQL Server instance when the AppDynamics controller is SaaS/Cloud hosted. The AppDynamics documentation isn't quite clear; is it correct to assume that the best approach is to provision an EC2 instance (or AWS WorkSpace) in my AWS environment with the appropriate VPC / RDS security group settings and install an agent? The EC2 instance or AWS WorkSpace would connect to the RDS instance. If anyone has a step-by-step guide they can share, that would be greatly appreciated. Thanks!
A quick test in regex101.com produced this regular expression:

\| \|(?<value>[^\|]+)\| SAML20
Hi, I have a search result with the field message.log, and the field contains this example pattern   /opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| 8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| IdP| success| yadayadaAdapter| | 285 I'd like to rex "this-value-here" which is always preceded by the pattern pipe-space-pipe-space and always followed by pipe-space-SAML20. Having trouble with the rex expression, appreciate the assistance.
Did you ever find a solution to this? I'm experiencing the same issue with my addon on a search head cluster.
Hi @baiden .. on my Win11 laptop, I am able to install Splunk 9.2.2.