All Posts


You've shared with us the Splunk Enterprise manual for setting up scripted authentication extensions with Okta: Configure authentication extensions to interface with your SAML identity provider - Splunk Documentation. So you should be fine if you proceed with that manual. Regarding the permissions, check the Python script and the endpoints it uses. Based on those endpoints, you could probably work out with your IAM colleagues which capabilities are needed.
Hi @Real_captain, try adding the keeporphans=true option to the transaction command (see https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Transaction). It should run:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=true
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY NIDF

Otherwise, use only the startswith option and drop the endswith option. Ciao. Giuseppe
You should check for SSL issues in your internal Splunk log (%SPLUNK_HOME%/var/log/splunk/splunkd.log). Search for the keyword SSL, the name of your private key, and the name of your certificate. Your screenshot shows a different certificate location from the locations configured in your web.conf for the private key and certificate.
The force_local_processing setting in props.conf will have the UF do some parsing.  See props.conf.spec for details.
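For reference, a minimal props.conf sketch of what that setting looks like on the UF side (the sourcetype stanza name here is a made-up placeholder; check props.conf.spec for the exact scope and defaults):

```
# props.conf on the Universal Forwarder
# [my_custom_sourcetype] is a hypothetical example stanza
[my_custom_sourcetype]
# Force the UF to run parsing locally for this sourcetype
# instead of deferring it to the indexer / heavy forwarder tier
force_local_processing = true
```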
Hi @deepakc, here is the output of the required checks:

- Check that your serverclass is taking the current config (/opt/splunk/bin/splunk btool serverclass list --debug) - Done: the only two serverclass.conf files are the ones under $SPLUNK_HOME/etc/system/default and $SPLUNK_HOME/etc/system/local
- Check the permissions on the HF's /opt/splunk/etc/apps/ (sudo chown -R splunk:splunk /opt/splunk/etc/apps - this is typical) - Done: folder ownership is fine
- Restart the HF / Deployment Server - Done
- Verify the ownership of the apps on the Deployment Server (typically splunk:splunk - sudo chown -R splunk:splunk /opt/splunk/etc/deployment-apps) - Done: ownership is fine
- Verify the firewall ports are all OK (HF to DS - port 8089) - Done: HFs can reach the DS on 8089 and vice versa
- Double-check the app names in serverclass.conf - Done: the app folder name and the app name in serverclass.conf are the same
Well.... I appreciate you helping me confirm it's just 2022
A few things to check (I know you have done some already):

- Check that your serverclass is taking the current config (there might be some config that's overriding; it's normally in /opt/splunk/etc/system/local/serverclass.conf and sometimes in a dedicated app): /opt/splunk/bin/splunk btool serverclass list --debug
- Check the permissions on the HF's /opt/splunk/etc/apps/ (sudo chown -R splunk:splunk /opt/splunk/etc/apps - this is typical)
- Restart the HF / Deployment Server
- Verify the ownership of the apps on the Deployment Server (typically they should be splunk:splunk - sudo chown -R splunk:splunk /opt/splunk/etc/deployment-apps)
- Verify the firewall ports are all OK (HF to DS - port 8089)
- Double-check the app names in serverclass.conf (I have seen app name typos in the past)
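For the last point, a minimal serverclass.conf sketch (the class name, app name, and host are placeholders) showing the key constraint: the app name after "app:" must exactly match the folder name under /opt/splunk/etc/deployment-apps/:

```
# serverclass.conf on the Deployment Server (hypothetical names)
[serverClass:hf_class]
whitelist.0 = hf1.example.com

# "my_hf_outputs" must match /opt/splunk/etc/deployment-apps/my_hf_outputs/
# exactly, including case
[serverClass:hf_class:app:my_hf_outputs]
restartSplunkd = 1
stateOnClient = enabled
```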
Two years have passed since this topic was opened. Is there any news on this?
Thanks gcusello. This solution really works when we have to extract the data of previous days. Is it possible to get the stats for the current date when startswith="IDJO20P" has arrived but endswith="PIDZJEA" has not yet been received?
The universal forwarder does not parse data except in certain limited situations. Can anyone tell me what these situations are?
After tooling with it more, I think the best approach uses the map command.

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| outputlookup lookup_of_events
| stats count by index
| map report_to_map_through_indexes

report_to_map_through_indexes:

| inputlookup lookup_of_events where index="$index$"
| collect index="$index$"
It seems the icon functionality ignores files in the $APP/appserver/static path and only pulls file data from the KV store, forcing you to somehow transfer this KV store collection to your SHCluster every time you deploy something. Not cool. It's easier to just convert all icons to images by hand, once. We used icons because we thought images don't have the "hideWhenNoData" option. It turns out they do have it, but the docs are not too clear: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/DashStudio/chartsImage https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/DashStudio/showHide
Could you help me? What would this search look like?
Hi Splunkers, I'm deploying a new Splunk Enterprise environment; inside it, I have (for now) 2 HFs and a DS. I'm trying to set an outputs.conf file on both HFs via the DS; the clients phone home to the DS correctly, but the apps are not downloaded. I checked the internal logs and got no errors related to the app. I followed the docs and the course material used during the Architect course for reference. Below is the configuration I made on the DS.

App name: /opt/splunk/etc/deployment-apps/hf_seu_outputs/

/opt/splunk/etc/deployment-apps/hf_seu_outputs/default/app.conf:

[ui]
is_visible = 0

[package]
id = hf_outputs
check_for_updates = 0

/opt/splunk/etc/deployment-apps/hf_seu_outputs/local/outputs.conf:

[indexAndForward]
index = false

[tcpout]
defaultGroup = default-autolb-group
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:default-autolb-group]
server = <idx1_ip_address>:9997, <idx2_ip_address>:9997, <idx3_ip_address>:9997

serverclass.conf:

[serverClass:spoke_hf:app:hf_seu_outputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>, <HF1_ip_address>

File and folder permissions are right; the owner is the user used to execute Splunk (in a nutshell, the owner of /opt/splunk). I suppose it is a very stupid issue, but I'm not able to figure it out.
Hi @fde, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @IlianYotov, do the new files have the same names as the previous ones or different ones? Did you check without the crcSalt = <SOURCE> option? Is it possible that the new files have the same content as the previous ones? Ciao. Giuseppe
Would have been nice for Splunk support to mention this. I've had to move on and decommission Server 2022. I installed 2019 like you suggested and everything is working as it should. Thanks again.
Hi @Real_captain, please try something like this:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| transaction startswith="IDJO20P" endswith="PIDZJEA"
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY NIDF

Ciao. Giuseppe
Hi, my problem is solved. With support guidance, I changed the IP address used for replication. Here are the steps used:

1) Stop the indexer.
2) Edit etc/system/local/server.conf and add the register_replication_address parameter with the new IP.
3) Rename etc/instance.conf to etc/instance.conf.bkp.
4) Restart the indexer.

CM - Index clustering: after a few seconds => OK. The MC reconfigured the indexer => OK. CM and SH - Distributed Search => 2 entries for the same instance name with 2 different Peer URIs; both entries appeared Down and Sick.

5) I deleted one entry, the one with the old Peer URI => after 5 min and a refresh, no more entries Down and Sick.

And now all seems to be OK. The IP address for the peer URI now shows the new IP address.
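For anyone following the same procedure, the server.conf change in step 2 would look roughly like this sketch (the stanza placement follows server.conf.spec; the IP address is a placeholder):

```
# etc/system/local/server.conf on the indexer (example IP)
[clustering]
# Address this peer advertises to the cluster manager for replication traffic,
# useful when the host has multiple interfaces or sits behind NAT
register_replication_address = 10.0.0.42
```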
Hi @avi7326, as I said, it makes no sense to put a result from a stats search and a table in the same panel. Use your searches in two different panels. Ciao. Giuseppe