All Posts

Great, thanks. Could you tell me what you did there to get that?
Hi @Srini_551 .. yes, Excel has some pretty cool features, but to replicate them in Splunk dashboards would take a lot of Splunk dashboarding skill and a lot of time programming it. In the end, we may ask ourselves, "is it really worth it?!"
Hi @smineo .. Rich's rex is working perfectly:

| makeresults
| eval log="/opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| 8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| IdP| success| yadayadaAdapter| | 285"
| rex field=log "\| \|(?<value>[^\|]+)"
| table log value
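For anyone who wants to sanity-check that regex outside Splunk, here is a minimal sketch using Python's re module with the sample log line from this thread. Note that Python spells named groups (?P<name>...) where Splunk's rex uses (?<name>...); the pattern is otherwise the same.

```python
import re

# Sample log line from this thread
log = ("/opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| "
       "tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| "
       "8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | "
       "this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| "
       "IdP| success| yadayadaAdapter| | 285")

# "pipe space pipe" anchors the match; the named group captures everything
# up to the next pipe (including a leading space, stripped off here).
match = re.search(r"\| \|(?P<value>[^|]+)", log)
print(match.group("value").strip())  # this-value-here
```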
Hello - I wanted to ask if anyone happens to know the best approach (recommended by Cisco) for monitoring an AWS RDS SQL Server instance when the AppDynamics controller type is SaaS/cloud hosted. The AppDynamics documentation isn't quite clear on this. Is it correct to assume that the best approach is to provision an EC2 instance (or AWS WorkSpace) in my AWS environment with the appropriate VPC / RDS security group settings and install an agent? The EC2 instance or AWS WorkSpace would connect to the RDS instance. If anyone has a step-by-step guide they can share, that would be greatly appreciated. Thanks!
A quick test in regex101.com produced this regular expression: \| \|(?<value>[^\|]+)\| SAML20
Hi, I have a search result with the field message.log, and the field contains this example pattern:

/opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| 8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| IdP| success| yadayadaAdapter| | 285

I'd like to rex out "this-value-here", which is always preceded by the pattern pipe-space-pipe-space and always followed by pipe-space-SAML20. I'm having trouble with the rex expression and would appreciate the assistance.
Did you ever find a solution to this? I'm experiencing the same issue with my add-on on a search head cluster.
Hi @baiden .. on my Win11 laptop, I am able to install Splunk 9.2.2.
Works beautifully. Thank you!
Thanks for the reply! I tried using the second eval to automatically calculate the percentage, but it doesn't seem to be working; it is only showing the count for each field from the first eval command.

| eval tempo= case( 'netPerf.netOriginLatency'<2000, "Under 2s", 'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s", 'netPerf.netOriginLatency'>3000, "Above 3s")
| timechart span=1h count by tempo usenull=false
| eventstats sum(count) as total by _time
| eval percentage=count/total

_time             Above 3s   Between 2s and 3s   Under 2s
08/07/2024 00:00  109        588                 19307
08/07/2024 01:00  113        530                 14900
08/07/2024 02:00  6          128                 5450
08/07/2024 03:00  22         122                 2847

But this already helps; I can export the results to CSV and calculate the percentage in Excel. Appreciate the quick reply!
Instead of your top command, do

| timechart span=1h count by tempo

to get counts for each tempo. Now the thing is to calculate the percentage. To do so, you need to find how many events you have in total for each hour:

| eventstats sum(count) as total by _time

(remember that _time is binned to the full hour due to timechart). Now you can just calculate your percentage:

| eval percentage=count/total
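One caveat worth noting (my observation, not part of the original reply): after `timechart count by tempo`, the counts land in columns named after each tempo value rather than in a single count field, which may be why a plain `eval percentage=count/total` shows only counts. The arithmetic itself is simple; here is a sketch in Python using the 00:00 sample counts posted in this thread, with the hour's total playing the role of `eventstats sum(count) as total by _time`:

```python
# Per-hour percentage arithmetic, mirroring
# eventstats sum(count) as total by _time | eval percentage=count/total
counts = {"Above 3s": 109, "Between 2s and 3s": 588, "Under 2s": 19307}

total = sum(counts.values())  # the eventstats-style per-hour total
percentages = {tempo: count / total for tempo, count in counts.items()}

for tempo, pct in percentages.items():
    print(f"{tempo}: {pct:.2%}")
```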
In the Splunk Add-on for ServiceNow, how do you update the description field in ServiceNow? Or is there a way to add a description field? Is it something I need to add under Custom fields to create it?
Sounds like you might want to use two bin commands. First bin by time:

| bin _time span=1h

Then bin netPerf.netOriginLatency into 5 (?) bins, e.g.

| bin netPerf.netOriginLatency bins=5

See the bin command: https://docs.splunk.com/Documentation/Splunk/9.2.2/SearchReference/Bin

Finally, you could do a timechart with your bins (you will still have to do your percentage etc. calculation).
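As a rough illustration of what bins=5 does, here is a Python sketch that splits an observed value range into five equal-width buckets. The latency values are made up, and Splunk's bin actually snaps to "nice" boundaries, so the real bucket edges will differ; this just shows the idea.

```python
latencies = [250, 900, 1500, 2400, 3100, 4800, 6000]  # hypothetical ms values

lo, hi = min(latencies), max(latencies)
width = (hi - lo) / 5  # five equal-width buckets over the observed range

def bucket(value):
    """Return the 0-4 bucket index for a latency value."""
    return min(int((value - lo) / width), 4)  # clamp the max into the last bucket

for v in latencies:
    b = bucket(v)
    print(f"{v} ms -> bin {lo + b * width:.0f}-{lo + (b + 1) * width:.0f}")
```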
1. Just because a user has permission to read a directory listing does not mean it will be able to read the individual files.
2. Again - SELinux issues? Is your SELinux in enforcing mode? Have you checked auditd's audit.log for SELinux denied access attempts?
Firstly, check with tcpdump that your events actually reach your destination host. Also, it's not recommended to use a network input directly on a Splunk component; there are other options for ingesting syslog data. And I would _not_ use a rolling-release distro like CentOS Stream for prod use. But that's just me.
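Alongside tcpdump, a quick way to sanity-check the local UDP socket path is a small Python loopback test: bind a listener, send one datagram to it, and confirm it arrives. This is only a sketch; it tests loopback delivery on the box itself, and tcpdump remains the right tool for verifying traffic from remote senders.

```python
import socket

def udp_loopback_check(payload=b"test syslog event", timeout=2.0):
    """Bind an ephemeral UDP port on loopback, send one datagram to it,
    and return the received bytes (or None on timeout)."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.settimeout(timeout)
    listener.bind(("127.0.0.1", 0))          # port 0 = pick any free port
    port = listener.getsockname()[1]
    try:
        sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sender.sendto(payload, ("127.0.0.1", port))
            return listener.recvfrom(65535)[0]
        finally:
            sender.close()
    except socket.timeout:
        return None
    finally:
        listener.close()

print(udp_loopback_check())
```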
Thank you for the answer. When I ran list inputstatus from bin, I received the output below; I have verified that splunkfwd has read access to /var/log.
We have been using Splunk on a Windows server without issue. It ingested logs from VMware hosts, networking hardware, firewalls, Windows events, etc. We created a new Splunk instance on CentOS Stream 9. It runs as the splunk user, so it couldn't use the UDP data input on 514. We set it to 10514 and did port forwarding to get around that. That works for everything except our VMware hosts; the logging from them will not show up in the new Splunk server. All the other devices/logs that want to send on UDP 514 show up in Splunk. The value on the VMware hosts that always worked before was udp://xxx.xxx.xxx.xxx:514. We tried the same with 10514 to no avail. Is there an issue with receiving logs from VMware hosts and having port forwarding send the data to a different port?
True. Strictly theoretically, you could use the same deployer to deploy apps to multiple SHCs, but they would have to have not only the same apps but also the same push modes, shared secret, and so on. Generally, it's much more trouble than it's worth. Just stick to a deployer per SHC, especially considering that deployer instances don't need to be big.
Thank you. Yes, the UF was restarted. The _internal logs did not have the monitored path in them. We also checked the permissions on /var/log, and splunkuf had read access to the files; we tested by logging in as splunkuf and were able to see the content of the files. The logs are still not showing up in the index. I have checked the index's configuration and it all checks out, with nothing different from the other indexes.
This is not exactly true. You could have several SHCs using the same deployer, BUT then all of them must have the same apps! So you cannot have one deployer and several SHCs with different apps.