All Posts


Optimise at the END of your dashboard rather than at the start. This is not a good use of a base search, so I would first work out all your searches, then MAYBE use a base search to optimise once you are happy with them. You will consume more memory and things will be slower, because all post-processing will occur on the search head rather than possibly on the distributed indexers. If you can give an example of the other searches, then there may be an optimisation, but start simple.

An example of where a base search may be suitable:

type="request" "request.path"="prod/" | stats count by auth.account_namespace request.path

and then you might have 2 post-processing searches that do

| stats sum(count) as count by auth.account_namespace | sort - count | head 10 | transpose 0 header_field=auth.account_namespace column_name=account_namespace | eval account_namespace=""

and

| stats sum(count) as count by request.path

so you are using the base search to take stats across 2 dimensions, and then each post-processing search calculates from those existing aggregations. See this article on post-processing/base searches: https://docs.splunk.com/Documentation/Splunk/9.2.1/Viz/Savedsearches#Post-process_searches_2
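The pattern above — one stats across two dimensions in the base search, then cheaper re-aggregations in each post-process search — can be sketched outside Splunk. This is an illustrative Python analogue of that idea, not Splunk code; the event field names are invented for the example:

```python
from collections import Counter

# Simulated events; the "base search" aggregates once across two dimensions.
events = [
    {"namespace": "acct-a", "path": "prod/x"},
    {"namespace": "acct-a", "path": "prod/y"},
    {"namespace": "acct-b", "path": "prod/x"},
]

# Base search: stats count by namespace, path
base = Counter((e["namespace"], e["path"]) for e in events)

# Post-process 1: sum(count) by namespace -- works from the base
# aggregation, never touching the raw events again.
by_namespace = Counter()
for (ns, _path), count in base.items():
    by_namespace[ns] += count

# Post-process 2: sum(count) by path, from the same base aggregation.
by_path = Counter()
for (_ns, path), count in base.items():
    by_path[path] += count

print(by_namespace)  # Counter({'acct-a': 2, 'acct-b': 1})
print(by_path)       # Counter({'prod/x': 2, 'prod/y': 1})
```

The point is that both panels derive from one pass over the events, which is exactly what a well-formed base search buys you.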
Are you saying it doesn't work? This works fine:

| makeresults | fields - _time | eval network="2a02:4780:10::/44" | outputlookup ipv6.csv

and

| makeresults | fields - _time | eval ipv6="2a02:4780:10:5be5::1" | lookup ipv6 network as ipv6 OUTPUT network as v6IP

where the match type is defined as CIDR(network).
Thanks @bowesmana for your comment. I'm very new to Splunk and not really sure if I do need a base search, but all I want is for these events to be searched only once, so my dashboard does not consume a lot of memory when it is loading or refreshing. At the moment I have 5 charts on my dashboard, and each needs data from those events with a different path in its search. All the events come from that one query, and what is happening now is that the same query runs 5 times. I thought a base search would be the best thing to use so the dashboard only queries once.
My understanding is that IPv6 IS supported, but I do recall I may have had some issues with CIDR on IPv6. Can you test

| makeresults | eval ipv6="2a02:4780:10:5be5::1" | search ipv6="2a02:4780:10::/44"

because search definitely should support CIDR matching on IPv6.
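As a sanity check outside Splunk, the expected match from the question — 2a02:4780:10:5be5::1 falling inside 2a02:4780:10::/44 — can be verified with Python's standard ipaddress module, which performs the same kind of CIDR containment test the CIDR(network) lookup definition is meant to do:

```python
import ipaddress

# The address and network from the question.
addr = ipaddress.ip_address("2a02:4780:10:5be5::1")
net = ipaddress.ip_network("2a02:4780:10::/44")

# Membership test: does the address fall inside the /44 block?
print(addr in net)  # True
```

Since the containment is mathematically correct, any mismatch in Splunk would point at the lookup configuration rather than the data.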
In your API column example, how are you assigning the tokApi token to the API column? I assume you are doing something like

| makeresults | eval API=$tokApi|s$

At least that is what you _should_ be doing...
You are using base searches wrongly. Firstly, you should be using a transforming command in your base search, not just loading events - that is not what base searches are intended for, and it can often make your dashboard perform badly. If you really need to return events, then you need to include a | fields statement listing the fields you want, but remember, base searches are limited and this is definitely NOT a good way to use one. You should really put your stats command in the base search itself, although that will of course depend on what else you want to use the search for.
Removing the databases in fishbucket restored order in my ingestions. Thank you! For the record, I preserved fishbucket/db/, which does not contain any BTree. (Had I known this, I could have done it while cleaning up the disk corruption. There should be no assumption that fishbucket escaped the corruption.)
Hello, I have an issue where a base search is not working on my dashboard. Interestingly, if I click on the search icon, it comes up with a valid search query and shows some results. However, on the dashboard itself it shows "no results found". Below is what I currently have set:

<search id="prod_request">
  <query>type="request" "request.path"="prod/"</query>
  <earliest>$timerange.earliest$</earliest>
  <latest>$timerange.latest$</latest>
  <sampleRatio>1</sampleRatio>
  <refresh>10m</refresh>
  <refreshType>delay</refreshType>
</search>

<chart>
  <title>Top 10 request</title>
  <search base="prod_request">
    <query>| stats count by auth.account_namespace | sort - count | head 10 | transpose 0 header_field=auth.account_namespace column_name=account_namespace | eval account_namespace=""</query>
  </search>
  <option name="charting.axisTitleX.text">Account Namespace</option>
  <option name="charting.chart">bar</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">default</option>
  <option name="charting.drilldown">all</option>
  <option name="charting.legend.placement">right</option>
  <option name="charting.seriesColors">[0x1e93c6, 0xf2b827, 0xd6563c, 0x6a5c9e, 0x31a35f, 0xed8440, 0x3863a0, 0xa2cc3e, 0xcc5068, 0x73427f]</option>
  <option name="refresh.display">progressbar</option>
</chart>
Are you using Classic or Studio? Please share significant part of your dashboard source in a code block to make reading easier
Hi, does anyone know if it's possible to replace the hard-coded javaHome key in DB Connect's dbx_settings.conf file with the java_home environment variable on Windows? I have an auto-patching set-up on a Windows Splunk heavy forwarder, and every time Java gets upgraded, it crashes the Splunk service. For example, I'd like to replace this:

javaHome = C:\Program Files\java\jdk-17.0.11.9-hotspot

With this:

javaHome=%java_home%

The above syntax doesn't work, although I'm not sure if it's a syntax or functionality issue. Thanks!
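Assuming the conf file does not expand environment variables itself, one common workaround is a small post-patch step that rewrites the javaHome line from the current JAVA_HOME. This is a hedged sketch, not DB Connect functionality; the function name is invented, and a lambda replacement keeps Windows backslashes literal (a plain re.sub replacement string would treat them as escapes):

```python
import re

def patch_java_home(conf_text: str, java_home: str) -> str:
    """Return conf_text with its javaHome line pointing at java_home.

    Intended to be run after each Java upgrade, feeding in
    os.environ["JAVA_HOME"] and rewriting dbx_settings.conf in place.
    """
    return re.sub(r"(?m)^javaHome\s*=.*$",
                  lambda _m: f"javaHome = {java_home}", conf_text)

sample = "javaHome = C:\\Program Files\\java\\jdk-17.0.11.9-hotspot\nuseSSL = 0"
patched = patch_java_home(sample, r"C:\Program Files\java\jdk-21")
print(patched.splitlines()[0])  # javaHome = C:\Program Files\java\jdk-21
```

Hooking a script like this into the auto-patching job (before restarting Splunk) would keep the key in sync without hand-editing the file.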
Starting with a field, domain, you can do the following:

| eval domain_reversed=split(domain,".")
| eval domain_reversed=mvreverse(domain_reversed)
| eval domain_reversed=mvjoin(domain_reversed,".")
| sort domain_reversed

This splits the field into a multivalue field called domain_reversed with the values separated by the dot, then reverses the order of the resulting multivalue field, then joins the values back together and sorts on the resulting field.
Name2 is giving me the 1988, not sure if it's converting.
2024-06-30 should be name2 and good should be value name name2 value value2
Hello, We have created lookup definitions that use CIDR matching for IPV4 ips and is working as expected.  We are running into issues with IPV6. We are trying to create a lookup definition that doe... See more...
Hello, we have created lookup definitions that use CIDR matching for IPv4 addresses, and they work as expected. We are running into issues with IPv6. We are trying to create a lookup definition that does a CIDR lookup on an IPv6 address. The lookup file uses CIDR notation. One example from the file is: 2a02:4780:10::/44. The IP that should match is: 2a02:4780:10:5be5::1. The lookup definition is: CIDR(network). Are IPv6 CIDR lookups supported? If not, how can we write the lookup definition to satisfy the requirement?
I have a customer asking why we have a link describing the new "features" for version 4.0.3 if this version was never released; we went from version 4.0.2 to 4.0.4. See the attached file.
Drilldown with transpose is not working as expected to fetch the row and column values; it's not giving me accurate results, and I'm not sure if this is related to transpose.

index=wso2 source="/opt/log.txt" "Count_Reportings"
| fields api-rep rsp_time mguuid
| bin _time span=1d
| stats values(*) as * by _time, mguuid
| eval onesec=if(rsp_time<=1000,1,0)
| eval threesec=if(rsp_time>1000 and rsp_time<=3000,1,0)
| eval threesecGT=if(rsp_time>3000,1,0)
| eval Total = onesec + threesec + threesecGT
| stats sum(onesec) as sumonesec sum(threesec) as sumthreesec sum(threesecGT) as sumthreesecGT sum(Total) as sumtotal by api-rep, _time
| eval good = if(api-rep="High", sumonesec + sumthreesec, if(api-rep="Medium", sumonesec + sumthreesec, if(api-rep="Low", sumonesec, null())))
| eval per_call=if(api-rep="High", (good / sumtotal) * 100, if(api-rep="Medium" , (good / sumtotal) * 100, if(api-rep="Low" , (good / sumtotal) * 100, null())))
| eval per_cal=round(per_call,2)
| timechart span=1d avg(per_cal) by api-rep
| eval time=strftime(_time, "%Y-%m-%d")
| fields - _time _span _spandays
| fillnull value=0
| transpose 0 header_field=time column_name=APIs include_empty=true

Below is the output for the above query. When I click on the 99.93 in the 2024-06-30 column, I need to pick up the row value Good and the column header 2024-06-30 and pass them into the drilldown query. When I click on 99.93 it gives me the output below; it is not giving me the row value as Good. These are the drilldown tokens:

tokClickValue1 = $click.value$
tokClickName1 = $click.name$
tokClickValue2 = $click.value2$
tokClickName2 = $click.name2$
tokApi = $row.APIs$

I want the tokens to fetch the header and APIs values to pass to the drilldown query.
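The bucketing logic in the query — counting responses under 1s, between 1s and 3s, and over 3s, then computing a "good" percentage that depends on the api-rep tier — can be sketched in Python to check the arithmetic independently of the drilldown problem (the sample response times are invented):

```python
def good_percent(rsp_times_ms, tier):
    """Mirror the SPL: bucket response times (<=1000ms, 1000-3000ms,
    >3000ms), then compute the 'good' percentage for a reputation tier."""
    onesec = sum(1 for t in rsp_times_ms if t <= 1000)
    threesec = sum(1 for t in rsp_times_ms if 1000 < t <= 3000)
    total = len(rsp_times_ms)
    if tier in ("High", "Medium"):
        good = onesec + threesec      # both buckets count as good
    elif tier == "Low":
        good = onesec                 # only sub-second counts as good
    else:
        return None
    return round(good / total * 100, 2)

times = [200, 1500, 4000, 800]
print(good_percent(times, "High"))  # 75.0
print(good_percent(times, "Low"))   # 50.0
```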
If there are no errors in the Splunk logs relating to sending email then there must be something happening to the messages after they leave Splunk.  Check your Spam folder and any automatic actions y... See more...
If there are no errors in the Splunk logs relating to sending email, then there must be something happening to the messages after they leave Splunk. Check your spam folder and any automatic actions you may have. Have you confirmed the alerts are firing?
Just upgraded to 9.2.2 on our heavy forwarder and had the same KV store errors. Our mongod.log displayed the same ssl errors. These steps worked perfectly! 
Thank you for posting your solution.  This was our problem after migration to RHEL9 and your solution fixed it.
I was hoping to get some help modifying the query above. I have an index and a sourcetype for my Windows environment. I would like to see the following:

- AuthenticationPackageName = this shows the type of authentication taking place, like NTLM, Kerberos, MFA, etc. I need this to show for each user (Windows Authentication Technical Overview | Microsoft Learn)
- Logon Type = used by Windows to show successful and failed logon events like (4624, 4625, 4648), and should have a count related to the above attribute (Windows Logon Scenarios | Microsoft Learn)
- LogonProcessName = the process name for the authentication action taking place for the user

PS. The idea here is to see what authentication action is taking place for each user, so I can say they are using NTLM or Kerberos to access this host or resource. Thanks again, Community!
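The shape of the report being asked for — a count per user, per authentication package, per event code — is essentially a grouped count. As a language-neutral sketch, here is the same grouping in Python over hypothetical parsed events; the field names mirror typical Windows Security log fields but are assumptions, not the actual sourcetype's field names:

```python
from collections import Counter

# Hypothetical parsed logon events (field names are assumptions).
events = [
    {"user": "alice", "package": "Kerberos", "event": 4624},
    {"user": "alice", "package": "NTLM",     "event": 4625},
    {"user": "bob",   "package": "Kerberos", "event": 4624},
    {"user": "alice", "package": "Kerberos", "event": 4624},
]

# Equivalent of an SPL "stats count by user, package, event":
counts = Counter((e["user"], e["package"], e["event"]) for e in events)
for (user, package, event), n in sorted(counts.items()):
    print(user, package, event, n)
```

In Splunk the same grouping would come from a stats count over the corresponding extracted fields, which directly answers "who is using NTLM vs Kerberos against this host".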