All Posts


I think what's happening is the KV store replication isn't consistent. Lookups that use CSVs are, but KV stores aren't. I'm trying to figure out how to troubleshoot this. @PickleRick - it only appears to be happening with an integer (number) field. String fields are working fine and are not included in the lispy. It seems like the search is assuming the integer field is indexed, but like you said, it isn't. Can you explain that more?
@tv00638481 - Have you got any fix for this issue? I'm facing the same - PRA logs are not getting updated into Splunk.
You can obfuscate fields with the SEDCMD directive if you know which fields hold the PCI and PAN data. Ideally, PCI and PAN data should not be in the logs at all - you should go back to your application developers to have these removed before they even reach Splunk.
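Outside of Splunk, what SEDCMD does at index time is essentially a regex substitution on the raw event. A minimal Python sketch of the masking idea (the 16-digit pattern here is illustrative only - real PAN detection needs Luhn checks and separator handling):

```python
import re

# Illustrative 16-digit card pattern; keep first 6 and last 4 digits,
# mask the middle - a common PCI masking convention. Not PCI-complete.
PAN_RE = re.compile(r"\b(\d{6})\d{6}(\d{4})\b")

def mask_pan(line):
    # Replace the middle six digits with X's, leaving BIN and last-4 visible
    return PAN_RE.sub(r"\1XXXXXX\2", line)

print(mask_pan("user paid with 4111111111111111 today"))
# -> user paid with 411111XXXXXX1111 today
```

The equivalent SEDCMD in props.conf would apply the same substitute expression in sed syntax (s/.../.../g) to events of the relevant sourcetype before they are written to the index.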
OK, a bit random - why have you used appendpipe?
Assuming your real events don't have brackets in the names, try something like this:

| rex "Example \(\[(?<keys>[^\]]*)\]\s*,\s*\[(?<values>[^\]]*)\]\)"
| rex max_match=0 field=keys "'(?<key>[^']+)'"
| rex max_match=0 field=values "'(?<value>[^']+)'"
| table key value
| eval pairs=mvzip(key, value, "=")
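For anyone checking the regex logic outside Splunk: the pipeline above is two regex passes (one for each bracketed list) followed by a pairwise zip, which is what mvzip does. A minimal Python sketch of the same extraction, using a shortened version of the event from the question:

```python
import re

event = ("2024-08-02 16:45:21- INFO Example (['test1' , 'test2', 'test3'] , "
         "['Medium', 'Large ', 'Small'])")

# First rex stage: capture the two bracketed lists
m = re.search(r"Example \(\[([^\]]*)\]\s*,\s*\[([^\]]*)\]\)", event)

# Second/third rex stages: pull the single-quoted items out of each list
keys = re.findall(r"'([^']+)'", m.group(1))
values = re.findall(r"'([^']+)'", m.group(2))

# mvzip(key, value, "=") equivalent; strip trailing spaces seen in the data
pairs = [f"{k}={v.strip()}" for k, v in zip(keys, values)]
print(pairs)
# -> ['test1=Medium', 'test2=Large', 'test3=Small']
```

If the pair count ever mismatches, zip silently truncates to the shorter list - mvzip behaves similarly, so it's worth sanity-checking mvcount(key) against mvcount(value) in the SPL.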
hi @ITWhisperer it doesn't work. I've tried something like this using the appendpipe command, and I can see results, e.g.:

Vendor-1  10
Vendor-2  10
All       20

| inputlookup filename.csv
| stats count by Vendor
| appendpipe [| stats sum(count) as count | eval Vendor="All"]

But when I select "All" from the drop-down, no values are shown in the single value visualization. When I select Vendor-1, values are displayed. How do I fix this issue?
Team, I was just able to create a search in Splunk to detect credit card numbers. PCI data was also onboarded into our new Splunk Cloud instance. How can we obscure these numbers once found and verified to in fact be an exposed user credit card number?
This is not giving the results which are needed - I see 0 for each entry from the lookup.
Thanks for your time. I see the data coming in with 0 for each entry from the lookup, but it should only give a value of 0 for the hosts which are not sending events.
I have a field message. When I run the search

index=example123 host=5566 | search "*specials word*" | table message

it displays as in the example below:

2024-08-02 16:45:21- INFO Example (['test1' , 'test2', 'test3', 'test4', 'test5', 'test6', 'test7)'] , ['Medium', 'Large ', 'Small', 'Small ', 'Large ', 'Large ', 'Large '])

Is there a way to run a command so that the data in the field "message" can be extracted into its own fields or displayed like this, matching 1:1 in a table:

test1   test2  test3  test4  test5  test6  test7
Medium  Large  Small  Small  Large  Large  Large

or test1=Medium, test2=Large, test3=Small, etc.?
What do you mean by data labs? Can you please provide the Splunk community with a better posed question:

What is your setup - is "datalabs" a provider of some sort of data?
Is there a TA for this, or are you using the DB Connect app, if supported?
Did this work before? What changed?
Have you developed a custom dashboard with panels that search the data?

-------------------------------------------------------
NOTE: Data is presented in dashboards by way of searches (SPL code) set up inside the dashboards. Data is typically indexed into an index, data model, summary index, or accelerated report, and this comes from Splunk's many different input methods. I would suggest starting by looking at the dashboard's SPL (search code) and troubleshooting from there. Look at the index (e.g. index=my_index) and then work out where the input is coming from. The TAs normally have some kind of input that performs collection every so often; if used, you need to check those settings.
There are certain data labs created where the data has stopped indexing. It should index data every 15 minutes, but that's not happening, and that data gets reflected on the Splunk dashboard. Can anyone assist or suggest why the data for some data labs is not indexing every 15 minutes?
What version of Splunk are you running? Have you tested on lower versions?
Try something like this:

earliest="1/1/2024:00:00:00"
| bin span=1h _time
| addinfo
| eval marker = if(_time < info_min_time + 60*24*3600, "January", "February")
| eval _time = if(_time < info_min_time + 60*24*3600, _time + 60*24*3600, _time)
| timechart count max(data) by marker span=1h
| timewrap 1mon
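To make the overlay trick above easier to follow: events from the earlier month get a marker and are shifted forward by a fixed offset so both months land in the same x-axis window. A minimal Python sketch of that per-event decision (the epoch value for info_min_time is an assumed 1 Jan 2024 00:00 UTC, matching the earliest clause):

```python
SECONDS_PER_DAY = 24 * 3600
SHIFT = 60 * SECONDS_PER_DAY  # the 60-day offset used in the SPL above

info_min_time = 1704067200  # assumed: 1 Jan 2024 00:00:00 UTC

def overlay(t):
    # Events inside the first 60 days are labelled and shifted forward;
    # later events keep their timestamp and get the second label.
    if t < info_min_time + SHIFT:
        return "January", t + SHIFT
    return "February", t

print(overlay(info_min_time + 3600))      # one hour into January -> shifted
print(overlay(info_min_time + 61 * SECONDS_PER_DAY)[0])  # -> February
```

The key point for non-consecutive months (the original question) is that the shift amount and the cutoff are independent: the cutoff decides which bucket an event belongs to, while the shift is whatever offset moves the earlier month onto the later month's axis.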
Hello Everyone, I want to integrate Power BI with Splunk and view Power BI logs in Splunk for analysis. Can someone explain how to integrate Power BI with Splunk and how to get logs from Power BI into Splunk?  
Hi, I'm trying to plot some data over one chart for 2 different months that are not consecutive, e.g. January and August, following the post below:

https://www.splunk.com/en_us/blog/tips-and-tricks/two-time-series-one-chart-and-one-search.html

I'm trying to calculate the median and plot just those 2 months in a single month timeframe. The below works for consecutive months, but I cannot figure out how to eval my time for arbitrary months - if I add to my info_min_time then my marker is plotted over several months.

earliest="1/1/2024:00:00:00"
| bin span=1h _time
| addinfo
| eval marker = if(_time < info_min_time + 60*24*3600, "January", "February")
| eval _time = if(_time < info_min_time + 60*24*3600, _time + 60*24*3600, _time)
| chart count max(data) by _time marker
Have you tried it this way:

| tstats count where index=_internal OR index=*
    [ search index=db_cloud sourcetype="azure:compute:vm:instanceView"
      | stats count by host
      | table host ]
    NOT
    [ search index="_internal" source="*metrics.log*" group=tcpin_connections
      | stats count by hostname
      | rename hostname as host
      | table host ]
    BY host
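The two subsearches act as set filters on the host field: the first keeps only hosts known to be Azure VMs, the second drops hosts already seen forwarding to the indexer. The net result is a set difference, sketched here in Python (host names are made up for illustration):

```python
# Hosts returned by the azure:compute:vm:instanceView subsearch (assumed)
azure_vm_hosts = {"vm-web-01", "vm-db-01", "vm-app-02"}

# Hosts seen in metrics.log tcpin_connections, i.e. actively forwarding (assumed)
forwarding_hosts = {"vm-web-01", "laptop-07"}

# [azure subsearch] NOT [forwarder subsearch] == set difference
silent_azure_vms = azure_vm_hosts - forwarding_hosts
print(sorted(silent_azure_vms))
# -> ['vm-app-02', 'vm-db-01']
```

This is why the rename in the second subsearch matters: both subsearches must emit the same field name (host) for tstats to treat them as filters on the same field.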
Thanks! I tried different ways but am unable to get this. If I want to add a line to check whether the device is an Azure VM, how would I do this?

| tstats count where index=_internal OR index=*
    NOT [ search index="_internal" source="*metrics.log*" group=tcpin_connections
          | stats count by hostname
          | rename hostname as host
          | table host ]
    BY host
AND [ search index=db_cloud sourcetype="azure:compute:vm:instanceView"
      | rename host as host_changed
      | table host_changed ]
    BY host

I tried this but it does not work.
Hi All, has anyone managed to solve this issue without reinstalling the UF? We have this problem only on certain Windows Server 2022 machines. Other Windows versions are not affected, and not all Win2022 machines are affected, only certain ones.

The command "Get-Counter -ListSet *" returns the following error:

Could not find any performance counter sets on the computer: error c0000bc8. Verify that the computer exists, that it is discoverable, and that you have sufficient privileges to view performance counter data on that computer

Perfmon counters are available for other users on this machine, so the problem is specific to the SplunkForwarder user. I've used the "lodctr /R" command but the issue still persists. The issue occurred immediately after the upgrade to version 9.1.5, so it's definitely a Splunk problem.
Can you do the same but scroll the view to the right to show the fields beginning with "p"?