All Posts

Ehh... didn't notice the value was enclosed in quotes. Quotes are major breakers, so TERM won't work then.
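As a hedged illustration of the point (the field, index, and value names are taken from the question further down this page): because the raw event reads service="JUNIPER-PROD", the quotes are major breakers, so service=JUNIPER-PROD is never stored as a single indexed term and TERM(service=JUNIPER-PROD) can never match. Searching the value on its own usually still works:

(index=prod) OR (index=opco_summary TERM(JUNIPER-PROD) service="JUNIPER-PROD")

TERM(JUNIPER-PROD) narrows the search to events containing that indexed term (the hyphen is only a minor breaker), and the service="JUNIPER-PROD" field filter keeps the match precise.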
Hi @schose
A full Splunk Enterprise installation is not currently supported on macOS (see https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/10.0/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises); only the UF package is supported. This could be for a number of reasons, but one is that MongoDB 5.0+ requires a CPU with AVX support, which Apple Silicon Macs do not provide (and Intel Macs are aging out).
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
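For anyone checking whether a particular machine could even run the MongoDB versions that ship with current Splunk, a hedged way to look for AVX from a shell (the sysctl keys below exist on Intel Macs; Apple Silicon has no AVX and typically does not expose these keys at all):

# Linux
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
# Intel macOS
sysctl machdep.cpu.features machdep.cpu.leaf7_features | grep -i avx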
Hi @addOnGuy
Try hitting the bump endpoint to clear the internal web cache: https://yourSplunkInstance/en-US/_bump, then click the "Bump Version" button.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
But when I am trying to use TERM for the service field, values are not returning. The service field is still there in my raw summary event. Not sure what went wrong:
(index=prod) OR (index=opco_summary AND (TERM(service=JUNIPER-PROD))
I even checked with only the summary index, and TERM with service is not working. This is my raw data for the summary index. I have extracted service from the original index, applied |eval service = service, and then collected it into the summary index:
07/31/2025 04:59:56 +0000, search_name="decode query", search_now=1753938000.000, info_min_time=1753937100.000, info_max_time=1753938000. info_search_time=1753938000.515, uri="/wasinfkeepalive.jsp", fqdn="p3bmm-eu.systems.uk.many-44", service="JUNIPER-PROD", vs_name="tenant/juniper/services/jsp" XXXXXX
Thank you livehybrid, good to know.
Regards, Harry
@dwong-rtr
Splunk Cloud restricts customization of the "From" address for triggered alert emails. The default sender (alerts@splunkcloud.com) is hardcoded and cannot be changed via the UI or configuration files. However, one option is to set up an internal SMTP relay that receives emails from Splunk Cloud and re-sends them using your internal service address.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@addOnGuy
- Try clearing your browser cache or using an incognito window
- Check both the default and local directories inside your add-on; old parameters might be lingering in local
- Restart Splunk
If all else fails, try exporting the current version and re-importing it into Add-on Builder as a fresh project.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@arvind_Sugajeev I also got the `You do not have permission to share objects at the system level` response when providing only `owner`. I resolved it by including `owner`, `share`, and `perms.read`.
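In case it helps the next person, here is a minimal hedged sketch of passing those settings to the object's ACL endpoint with curl (the host, credentials, app, and saved-search name are placeholders; on the raw REST /acl endpoint the sharing parameter is named sharing, which some SDKs surface as share):

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Search/acl \
  -d owner=admin \
  -d sharing=app \
  -d perms.read=*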
This worked for me, thank you very much
Kindly repeat the step again: "select the forwarders". When it comes to selecting the server class, don't create a new one; just select "existing", choose the previous one you created, and the "local event logs" will appear.
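For reference, the selections made in that wizard end up in serverclass.conf on the deployment server; a hedged sketch of what an equivalent entry can look like (the class, host, and app names here are purely illustrative):

[serverClass:windows_eventlogs]
whitelist.0 = my-forwarder-01

[serverClass:windows_eventlogs:app:windows_eventlog_inputs]
restartSplunkd = true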
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to someh... See more...
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to somehow show the parameters I used for a prior version. I've checked the global config json file and everywhere else I could think of and they all reflect the parameters for the new version. Despite that, the UI still shows the old parameters. Does anyone have any idea why or where else I could check?
Thanks for replying, but no encryption. I used the modular framework of the Python script that it gave me as a template.
Are you saying that if you run that second search in a different app context, the behaviour is different?
Note that your SPL logic to do stats earliest(_time) as min_time will not tell you the actual search range, just the time of the earliest event it found. Try this SPL:
... | stats min(_time) as min_time max(_time) as max_time by index
| convert ctime(min_time) ctime(max_time)
| addinfo
The addinfo command will show you the actual search range used by the search, irrespective of any events found.
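If it helps to compare the event range with the actual search boundaries in one table, the info_* fields that addinfo adds can be converted the same way (a small hedged extension of the SPL above; the field names follow addinfo's output):

... | stats min(_time) as min_time max(_time) as max_time by index
| addinfo
| convert ctime(min_time) ctime(max_time) ctime(info_min_time) ctime(info_max_time)
| table index min_time max_time info_min_time info_max_time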
Hi all,
When upgrading from v9.4.1 to a newer version (including 10) on macOS (arm) I receive the error message:
-> Currently configured KVStore database path="/Users/andreas/splunk/var/lib/splunk/kvstore"
-> Currently used KVStore version=4.2.22. Expected version=4.2 or version=7.0
CPU Vendor: GenuineIntel
CPU Family: 6
CPU Model: 44
CPU Brand: \x
AVX Support: No
SSE4.2 Support: Yes
AES-NI Support: Yes
There seems to be an issue with determining AVX correctly through Rosetta?! Anyway, I tried to upgrade on v9.4.1 using
~/splunk/bin/splunk start-standalone-upgrade kvstore -version 7.0 -dryRun true
and receive the error:
In handler 'kvstoreupgrade': Missing Mongod Binaries :: /Users/andreas/splunk/bin/mongod-7.0; /Users/andreas/splunk/bin/mongod-6.0; /Users/andreas/splunk/bin/mongod-5.0; /Users/andreas/splunk/bin/mongod-4.4; Please make sure they are present under :: /Users/andreas/splunk/bin before proceeding with upgrade. Upgrade Path = /Users/andreas/splunk/bin/mongod_upgrade not found Please make sure upgrade tool binary exists under /Users/andreas/splunk/bin
The error that mongod-4.4, mongod-5.0, mongod-6.0 and mongod-7.0 are missing is correct; the files are not there. They are not in the delivered Splunk .tgz for macOS, while the Linux tarball includes them.
Any hints?
Best regards, Andreas
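A quick hedged way to confirm which mongod binaries a given install or package actually ships (the paths follow the $SPLUNK_HOME from the question; the tarball file name is illustrative):

ls -l /Users/andreas/splunk/bin/mongod*
# or inspect a downloaded package without unpacking it
tar -tzf splunk-10.x.x-darwin.tgz | grep 'bin/mongod'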
THAT'S what I forgot to mention. They are pushing everything to a HF on their side, which is pushing to my HFs. I will try that out. Thanks!
Hi @jessieb_83
Are they sending from a UF or a HF? If you aren't having much luck with this props/transforms combo then it sounds like the data might be arriving to you already parsed, so what you're doing here won't have an effect. If the data has already been through a HF then you could try this instead:
# props.conf
[linux_messages_syslog]
RULESET-dropSyslog = dropLog

# transforms.conf
[dropLog]
INGEST_EVAL = queue="nullQueue"
If you are sure it's coming from a UF then you could try setting "REGEX = ." on your existing config, however I think what you had should have worked.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
To follow up. Using
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH
this fails under the following conditions:
- If you have an old `Splunkd.service` file with a line using =!, like the following:
ExecStart=!/opt/splunk/bin/splunk _internal_launch_under_systemd
If so, you will need to recreate the Splunkd.service file.
- If you utilize the "Data inputs --> Files & directories" monitor method to ingest the /var/log/audit/audit.log files, this fails.
This works with a Splunkd.service file created by a current Splunk version (mine is 9.3.5) and using the Splunk_TA_nix script method of ingest via rlog.sh.
Kudos to @livehybrid for causing me to review and realize I had an out-of-date Splunkd.service file.
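For anyone reproducing the working setup, the rlog.sh method is enabled through the TA's script input; a hedged local inputs.conf override might look like this (the stanza name matches Splunk_TA_nix, but the interval and sourcetype shown are assumptions to check against the TA's own defaults):

[script://./bin/rlog.sh]
disabled = 0
interval = 60
sourcetype = auditd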
I'm at a loss and hoping for an assist.
Running a distributed Splunk instance, I used the Deployment Server to push props.conf and transforms.conf to my heavy forwarders to drop specific external customer logs at the HF.
We're receiving logs from several external customers, each with their own index. I'm in the process of dividing each customer into sub-indexes like {customer}-network, {customer}-authentication and {customer}-syslog. Yes, I'm trying to dump all Linux syslog. This is a temporary move while their syslog is flooding millions of errors, before we're able to finish moving them to their new {customer}-syslog index. I did inform them and they're working it, with no ETA.
I've been over a dozen posts on the boards, I've asked two different AIs how to do this backwards and forwards, and I've triple-checked spelling, placement and permissions. I tried pushing the configs to the indexers from the cluster manager and that didn't work either. I created copies of the configs in ~/etc/system/local/ and no dice. I've done similar in my lab with success. I verified the customer inputs.conf is declaring the sourcetype as linux_messages_syslog. I'm at a total loss as to why this isn't working.
props.conf:
[linux_messages_syslog]
TRANSFORMS-dropLog = dropLog

transforms.conf:
[dropLog]
REGEX = (?s)^.*$
DEST_KEY = queue
FORMAT = nullQueue

Anyone have any idea what got'cha I'm getting got by?
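One quick way to confirm which copy of these settings actually wins on the heavy forwarder is btool; run it on the HF itself, and the file paths it prints in --debug mode show which app each line is coming from:

$SPLUNK_HOME/bin/splunk btool props list linux_messages_syslog --debug
$SPLUNK_HOME/bin/splunk btool transforms list dropLog --debug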
Hey @harryvdtol
Some good news: "Trellis layout support has been expanded to more visualizations. Now, in addition to single value visualizations, you can apply trellis layout to area, line, bar, and column charts." in Splunk Enterprise 10.0. Check out this blog for more info.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @spamarea1
Do you have any encrypted fields in the input configuration? It might be that these aren't copied when an input is cloned; this might explain why you are getting a 401 error from your API if it's missing some credentials/password etc. If you've cloned it, try updating any encrypted value, if appropriate.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
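If you want to verify whether the cloned input still has a credential behind it, the app's credential store can be listed over REST; a hedged sketch (host, credentials, and app name are placeholders, and how the add-on names its realm/username entries depends on its own code):

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/my_addon/storage/passwords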