All Posts
How to explode one row into several by breaking out a multi-value field? Sample events:

app=ABC client=AA views=View1,View2
app=ABC client=AA views=View1,View2,View3
app=ABC client=BB views=View1,View3
app=ABC client=CC views=View3,View2,View1

I want to table that as column data:

app  client  view
ABC  AA      View1
ABC  AA      View2
ABC  AA      View1
ABC  AA      View2
ABC  AA      View3
ABC  BB      View1
ABC  BB      View3
ABC  CC      View3
ABC  CC      View2
ABC  CC      View1

So that I can run a count on the resulting rows:

app  client  view   count
ABC  AA      View1  2
ABC  AA      View2  2
ABC  AA      View3  1
ABC  BB      View1  1
ABC  BB      View3  1
ABC  CC      View3  1
ABC  CC      View2  1
ABC  CC      View1  1
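A minimal SPL sketch, assuming views is extracted as a single comma-separated string (field names taken from the sample events above):

| makemv delim="," views
| mvexpand views
| rename views as view
| stats count by app client view

makemv splits the comma-separated value into a multi-value field, mvexpand emits one row per value, and stats produces the per-view counts shown above.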
Ok, thanks, I will update that. What needs to be in the web.conf file to enable CAC login? I currently have:

[settings]
httpport = 8000
enableSplunkWebSSL = 1
tools.sessions.timeout = 15
requireClientCert = true
enableCertBasedUserAuth = true
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = commonname
allowSsoWithoutChangingServerConf = 1
privKeyPath = E:\SPLUNKent\etc\auth\mycerts\xx.key
serverCert = E:\SPLUNKent\etc\auth\mycerts\xx.pem
The value for userNameAttribute needs to be userPrincipalName to match the value being extracted from the CAC.
userNameAttribute = samaccountname
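For reference, a minimal sketch of the corrected setting in authentication.conf; the stanza name below is a placeholder for your LDAP strategy, not something from this thread:

[your_LDAP_strategy]
userNameAttribute = userPrincipalName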
Hi @Gayatri, Is the Windows server showing on the `Clients` tab on the DS? If yes, can you query the internal logs for that Windows server? You can use this query:

index=_internal host=$windows-server-hostname$

If no logs are available, consider restarting the Splunk UF on the Windows server. If you have already restarted, did you enable forwarding during the Splunk UF installation?
Anything above 9.2.2 will have the fix, so you should be fine with 9.3. What is the value you are using for userNameAttribute in authentication.conf?
I am also having this issue. We are on Splunk 9.3.0. So for Army, is it not possible to use DoD CAC authentication with this version?
Thank you for this post. We are experiencing a similar issue and have opened a case. We are now testing the suggestion made by the Tech Support engineer:

Please add the following stanza to your configuration to see if it resolves the issue.

%SPLUNK_HOME%\etc\apps\introspection_generator_addon\local\server.conf

[introspection:generator:resource_usage]
disabled = true
acquireExtra_i_data = false

Would you please check/apply the workaround and let us know the output?

I will provide an update when I know more, and when I am back from leave.
We migrated our Splunk indexer from Ubuntu to RHEL recently. Everything appeared to go fine except for this one add-on. Initially, we were getting a different error. I ran fapolicyd-cli to add the splunk file, and that error cleared, but now we get this one:

External search command "ldapgroup" returned error code 1. Script output = "error message=HTTPError at "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1245 : HTTP 403 Forbidden - insufficient permission to access this resources."

I went in and did chown -R on the folder (and every other folder in the path, including /opt/splunk), but that didn't fix it. The files and folders are all owned by splunk and have permission to run. I have verified the firewall ports for 636 and 389 are open. We have tried to reinstall the add-on through the web interface and get a series of similar errors indicating that it can't copy a number of .py files over. Some do get copied, though, and most of the folders get created. I'm at a bit of a loss...
Hi @Manas.Dixit, Did you get around to trying it? Anything to report back on?
Hi @Dhinagar.Devapalan, Thanks for the update. Do let us know what happens after you speak with the Network team and share any learnings here, please.
Did you ever figure out what the issue was? I am having the same issue with VSCode and the API; however, it did work at one point earlier this year and has now stopped working. I don't know exactly when, because using VSCode to search our Splunk Cloud has not been a regular thing.
Hi, I'm also facing a similar issue. We have installed version 9.0.2 of the Splunk UF on Windows servers, and connectivity towards the DS all looks good. The firewall team and NSG have also confirmed rules and routing are in place. Still, we are not able to see logs at the Splunk console. We are getting errors stating "existing connection forcibly closed by remote host" and "TCPout processor stopped processing the flow and blocked for seconds". Can you help us here with your inputs?
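If it helps while waiting for replies, a minimal sketch of an internal-log search for the forwarder's TCP output errors; the host value is a placeholder for your UF hostname, and the component filter is an assumption about how splunkd tags these messages:

index=_internal host=$windows-server-hostname$ sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)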
Hello,

I've seen many others in this forum trying to achieve something similar to what I'm trying to do, but I didn't find an answer that completely satisfied me. This is the use case: I want to compare the number of requests received by our web proxy with the same period in the last week, then filter out any increase lower than X percent.

This is how I've tried to implement it using timewrap, and it's pretty close to what I want to achieve. The only problem is that the timewrap command only seems to work if I group by _time alone.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

This gives me a result like this:

_time  event_count_1week_before_week  event_count_latest_week
XXXX   YYYY                           ZZZZ

If I try to do something similar but group by the name of the web site being accessed in the tstats command, then the timewrap command doesn't work for me anymore. It outputs just the latest values of one of the web sites.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

That doesn't work. Do you know why that happens and how I can achieve what I want?

Many thanks.

Kind regards.
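A sketch of one possible workaround, assuming timewrap expects timechart-style input with one column per series: convert the Web.site split-by rows into per-site columns with xyseries before wrapping, then compare the week-suffixed columns (for example with foreach, since the column names become dynamic). This has not been verified against the data above:

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| xyseries _time Web.site event_count
| timewrap 1w

After this, the result has one latest_week/1week_before pair of columns per site rather than a single event_count pair.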
Leverage your monitoring console as the easy method to check volume sizes on each of your indexers. Ideally the total space should be absolutely identical across all indexers, e.g. 100MB x 8 idx/site x 2 sites (completely made-up numbers).

| rest splunk_server=* /services/data/index-volumes

Run that SPL on your search head and it will return results for all servers in your search cluster and indexing cluster. You can add more search terms to get down to the indexer level and then transform the results for "/dataHot", "/dataCold", and "_splunk_summaries". Look at the per-server results for used vs. available/total and create a calculated field for %used. Anything above 85% for /dataCold is typically a strong indication you need to expand your storage capabilities. Note that "/dataHot" by design runs full before it will roll a bucket over to the /dataCold volume.
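A minimal sketch of that calculated field; the volume_size and volume_size_used field names below are assumptions about what the endpoint returns, so check the actual field names from the REST call first:

| rest splunk_server=* /services/data/index-volumes
| search title IN ("/dataHot", "/dataCold", "_splunk_summaries")
| eval pct_used = round((volume_size_used / volume_size) * 100, 1)
| table splunk_server title volume_size volume_size_used pct_used
| where pct_used > 85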
I have the same requirement: connect to the Storage Account via the private endpoint DNS name rather than the storage account name.
Hi @juvenile, as you like: the same as All, or one of the dynamic values. Ciao. Giuseppe
I lost track of this thread, but the issue here was resolved some time ago. We're further along in our deployment now and looking at other issues, lol.
I may have restrictions, but I'm not sure what they are, as I inherited some configurations from our Splunk PS, who started building our architecture, which I subsequently ended up finishing. Where should I look? As for the internal logs, they are vague. They do tell me which buckets, but there are a lot of them. Nothing stands out to me, but that could be my untrained eye on whether they are all hot/warm/cold.
Hi,

Does anyone have an idea how we can disable or hide the OpenAPI.json that is visible in the UI of a Splunk add-on, which exposes the schema of the inputs? I also deleted the OpenAPI.json file from the appserver/static directory, which prevented downloading the file, but the button itself is still present, and that is what I want to hide. It is referenced in some .js files that are difficult to read. Any idea how to hide the button below from the Configuration tab of the UI?