All Posts

Hi @baiden, on my Win11 laptop I am able to install Splunk 9.2.2.
Works beautifully. Thank you!
Thanks for the reply! I tried using the second eval to automatically calculate the percentage, but it doesn't seem to be working; it only shows the count for each field from the first eval command.

| eval tempo=case('netPerf.netOriginLatency'<2000, "Under 2s", 'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s", 'netPerf.netOriginLatency'>3000, "Above 3s")
| timechart span=1h count by tempo usenull=false
| eventstats sum(count) as total by _time
| eval percentage=count/total

_time             Above 3s   Between 2s and 3s   Under 2s
08/07/2024 00:00  109        588                 19307
08/07/2024 01:00  113        530                 14900
08/07/2024 02:00  6          128                 5450
08/07/2024 03:00  22         122                 2847

But this already helps; I can extract the results to CSV and calculate the percentage in Excel. Appreciate the quick reply!
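A possible reason the percentage column comes out empty here: after | timechart span=1h count by tempo, the per-hour counts end up in columns named after each tempo value (as in the table above), so there is no literal count field left for eventstats to sum. A rough sketch of one workaround, reshaping the timechart output with untable and xyseries (field names assumed to match the search above):

| eval tempo=case('netPerf.netOriginLatency'<2000, "Under 2s", 'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s", 'netPerf.netOriginLatency'>3000, "Above 3s")
| timechart span=1h count by tempo usenull=false
| untable _time tempo count
| eventstats sum(count) as total by _time
| eval percentage=round(count*100/total, 1)
| xyseries _time tempo percentage

untable turns the wide timechart result back into _time/tempo/count rows so the eventstats and eval steps have a count field to work with, and xyseries then pivots the percentages back into one column per category.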
Instead of your top command, do
| timechart span=1h count by tempo
to get counts for each tempo. The next thing is to calculate the percentage. In order to do so, you need to find how many events you have in total for each hour:
| eventstats sum(count) as total by _time
(remember that _time is binned to the full hour due to timechart). Now you can just calculate your percentage:
| eval percentage=count/total
In the Splunk Add-on for ServiceNow, how do you update the description field in ServiceNow? Or is there a way to add a description field? Is it something I need to add under Custom fields to create it?
Sounds like you might want to use two bin commands. First bin by time:
| bin _time span=1h
Then bin the netPerf.netOriginLatency into 5 (?) bins, e.g.
| bin netPerf.netOriginLatency bins=5
See the bin command documentation: https://docs.splunk.com/Documentation/Splunk/9.2.2/SearchReference/Bin
Finally, you could do a timechart with your bins (you will have to do your percentage etc. calculation) - see the sketch below.
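For what it's worth, a rough sketch of how those pieces could fit together. It uses stats on the manually binned _time rather than timechart, so the one-hour bin is kept, and the latency field name is taken from the question; adjust the number of bins and the percentage step to taste:

| bin _time span=1h
| bin netPerf.netOriginLatency bins=5
| stats count by _time netPerf.netOriginLatency
| eventstats sum(count) as total by _time
| eval percent=round(count*100/total, 1)
| xyseries _time netPerf.netOriginLatency percent

The eventstats/eval part mirrors the percentage calculation discussed elsewhere in this thread, and xyseries pivots the result into one column per latency bin.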
1. Just because a user has permissions to read a directory listing does not mean it will be able to read the individual files.
2. Again - SELinux issues? Is your SELinux in enforcing mode? Have you checked your auditd logs for SELinux denied access attempts?
Firstly, check with tcpdump that your events do reach your destination host. Also - it's not recommended to use a network input directly on a Splunk component. There are other options for ingesting syslog data. And I would _not_ use a rolling-release distro like CentOS Stream for prod use. But that's just me.
Thank you for the answer. When I run list inputstatus from bin, I received the output below. I have verified that splunkfwd has read access to /var/log.
We have been using Splunk on a Windows server without issue. It ingested logs from VMware hosts, networking hardware, firewalls, Windows events, etc. We created a new Splunk instance on CentOS Stream 9. It runs as the splunk user, so it couldn't use a UDP data input on port 514. We set it to 10514 and did port forwarding to get around that. That works for everything except our VMware hosts; their logging will not show up in the new Splunk server. All the other devices/logs that send on UDP 514 show up in Splunk. The value on the VMware hosts that always worked before was udp://xxx.xxx.xxx.xxx:514. We tried the same with 10514 to no avail. Is there an issue with receiving logs from VMware hosts when port forwarding sends the data to a different port?
True. Strictly theoretically, you could use the same deployer to deploy apps to multiple SHCs, but they would all have to have not only the same apps but also the same push mode, shared secret, and so on. Generally, it's much more trouble than it's worth. Just stick to a deployer per SHC, especially considering that deployer instances don't need to be big.
Thank you. Yes, the UF was restarted. The _internal logs did not have the monitored path in them. We also checked the permissions on /var/log, and splunkuf had read access; we tested by logging in as the splunkuf user and were able to see the content of the files. The logs are still not showing up in the index. I have checked the index's configuration and it all checks out, with nothing different from the other indexes.
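In case it helps narrow this down, the UF's own splunkd log messages (forwarded to the _internal index by default) usually say whether a monitored path was picked up, skipped, or blocked. A rough sketch of a search for that, where the host value is a placeholder for your forwarder's hostname:

index=_internal host=<your_uf_host> sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "/var/log"

Messages there about permissions, ignored or blacklisted files, or files already read to the end are usually the quickest pointer to why nothing reaches the index.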
This is not exactly true. You could have several SHCs using the same deployer, BUT then all of those must have the same apps! So you cannot have one deployer and several SHCs with different apps.
Hello! I'm trying to separate the latency results with eval by dividing them into 3 categories and then showing the percentage using the top command. This was working for the beginning of the project, but now I need to separate the results by hour instead of the whole day, and including the table command and using the fields from eval is not working.

Here's my search:

| eval tempo=case('netPerf.netOriginLatency'<2000, "Under 2s", 'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s", 'netPerf.netOriginLatency'>3000, "Above 3s")
| top 0 tempo

Latency            count   percent
Under 2s           74209   86.5 %
Between 2s and 3s  10736   12.5 %
Above 3s           803     0.9 %

Ideal scenario would be something like this:

_time             Under 2s   Between 2s and 3s   Above 3s
06/07/2024 00:00  97.3 %     2.3 %               2.3 %
06/07/2024 01:00  96.3 %     2.7 %               1.0 %

Appreciate the time and help!
And it's essential that RF = SF both before and after SmartStore is enabled. If not, there is a possibility that a bucket copied to S3 doesn't contain searchable metadata, and then it won't work later!
If you can define which line contains headers and which contains values, then you can do this with any countable columns. It's enough to know the maximum number of columns you could have.
It's a bit more complicated than that. Hot buckets are normally streamed to other peers to meet RF/SF. But when a bucket rolls to warm, it's uploaded to SmartStore (which then takes care of data resilience) and local copies can be evicted to make room for frequently used cached buckets. So initially you might get an RF/SF-meeting number of copies of a warm bucket, but at any time the cache manager can decide to evict such a warm copy until it is needed again - then it will be re-downloaded from SmartStore. But yes, for warm buckets there is no longer replication between peers to meet RF/SF. Each bucket is just copied once to SmartStore, and it's SmartStore's responsibility to make sure that the bucket is available.
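If you want to check what this looks like on a live system, one possible starting point is the dbinspect command, which lists the buckets of an index together with their state (the index name below is a placeholder):

| dbinspect index=<your_index>
| stats count by state

The exact fields returned can vary between versions and SmartStore setups, so treat this as a starting point for poking around rather than a definitive cache report.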
Ahh. OK. That wasn't clear. I thought that maybe there's some "practice" environment with that training. Anyway, you can look for your data by doing either what @marnall said or a quick summary:

| tstats count min(_time) as earliest max(_time) as latest where index IN (*,_*) by index sourcetype
| convert ctime(earliest) ctime(latest)

to see when and where your data is. (The underscore-prefixed indexes are Splunk's internal ones; they are just there to show you what the output should look like.) Run this search over All Time.
With SmartStore, replication and search factors only apply to hot buckets on the local machines. The idea is that the remote storage takes responsibility for the high availability and redundancy of the buckets. Thus, even if you adjust your RF, it will not change the number of bucket copies on the remote storage.
What happens if you search:

index=*

...and set the time to "All Time"? This search should get all non-hidden logs in your Splunk indexes. Hopefully you get logs from several sourcetypes, and you can click on the sourcetype field in the fields column of the list to find the one you specified when you onboarded your logs. If your sourcetype does not appear, then it is likely that something went wrong with the onboarding.