All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Why can't I set srchFilter to the role's index by default? What would be the drawback of this? For prod roles I need to include the summary index condition as well, with their restricted services, exactly the way @splunklearner described. Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))
No, I don't think you can modify the behaviour of the collect command, and hence the output event format (much). And yes, you'd have to do some magic like index!=your_specific_index OR (index=your_specific_index AND <additional conditions...>). Ugly, isn't it?
Is there any way I can get rid of the double quotes? One more thing I noticed: if a user has access to 10 roles (10 indexes) and we have applied srchFilter to 6 of those roles, then when they search with the other 3 indexes (which are not in any srchFilter), they see no results. Does that mean that if I use srchFilter by default, I need to include each index's name in a srchFilter? The srchFilter values are being combined across all the roles with OR...
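The behaviour described above follows from Splunk combining the srchFilter values of every role a user holds into one OR expression: any index not named in at least one of those filters matches nothing. A hypothetical authorize.conf sketch (role and index names are made up here) of the workaround the poster is converging on:

```
# Splunk ORs the srchFilter of every role the user holds into one filter.
# An index covered by none of the OR'd filters returns no results, so each
# filtered role's srchFilter should name its own index explicitly.

[role_abc]
srchIndexesAllowed = non_prod
srchFilter = (index=non_prod)

[role_other]
srchIndexesAllowed = other_idx
# Without this line, searches on other_idx fall outside every OR'd
# filter once any other role carries a srchFilter:
srchFilter = (index=other_idx)
```

This is a sketch of the observed behaviour, not a definitive statement of every srchFilter interaction; see the authorize.conf documentation for how filters are combined.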
Ehh... I didn't notice the value was enclosed in quotes. Quotes are major breakers, so TERM won't work there.
Hi @schose

A full Splunk Enterprise installation is not currently supported on macOS (see https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/10.0/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises). Only the UF package is supported. This could be for a number of reasons; however, MongoDB 5.0+ requires a CPU with AVX support, which Apple Silicon Macs do not provide (and Intel Macs are aging out).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
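If you want to verify the AVX point on a given machine, a small best-effort check is sketched below. It reads /proc/cpuinfo on Linux and queries the hw.optional.avx1_0 sysctl key on macOS (that key is present on Intel Macs and absent on Apple Silicon); treat the exact key name as an assumption worth confirming on your own box.

```python
import platform
import re
import subprocess

def has_avx() -> bool:
    """Best-effort AVX detection (Linux via /proc/cpuinfo, macOS via sysctl)."""
    system = platform.system()
    try:
        if system == "Linux":
            with open("/proc/cpuinfo") as f:
                flags = f.read()
            # The plain "avx" flag appears alongside avx2/avx512 variants.
            return re.search(r"\bavx\b", flags) is not None
        if system == "Darwin":
            # hw.optional.avx1_0 reports 1 on Intel Macs with AVX; the key
            # is missing on Apple Silicon, so this path returns False there.
            out = subprocess.run(
                ["sysctl", "-n", "hw.optional.avx1_0"],
                capture_output=True, text=True,
            ).stdout.strip()
            return out == "1"
    except OSError:
        pass
    return False

print(has_avx())
```

On other platforms (or when the probe fails) it conservatively reports False.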
Hi @addOnGuy  Try hitting the bump endpoint to clear the internal web cache. https://yourSplunkInstance/en-US/_bump Then click the "Bump Version" button:  Did this answer help you? If so, p... See more...
Hi @addOnGuy

Try hitting the bump endpoint to clear the internal web cache: https://yourSplunkInstance/en-US/_bump and then click the "Bump Version" button.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
But when I am trying to use TERM for the service field, no values are returned. The service field is still there in my raw summary event. Not sure what went wrong:

(index=prod) OR (index=opco_summary AND TERM(service=JUNIPER-PROD))

I even checked with only the summary index, and TERM with service is still not working. This is my raw data for the summary index. I extracted service from the original index with |eval service = service and then collected it into the summary index:

07/31/2025 04:59:56 +0000, search_name="decode query", search_now=1753938000.000, info_min_time=1753937100.000, info_max_time=1753938000. info_search_time=1753938000.515, uri="/wasinfkeepalive.jsp", fqdn="p3bmm-eu.systems.uk.many-44", service="JUNIPER-PROD", vs_name="tenant/juniper/services/jsp" XXXXXX
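The reason TERM finds nothing here can be seen with a toy model of index-time segmentation. Splunk indexes the terms produced by splitting the raw event on major breakers, and double quotes are major breakers, so the quoted value in service="JUNIPER-PROD" is indexed as separate terms rather than as one service=JUNIPER-PROD term. This is a deliberately simplified segmenter (the breaker set is an approximation, not Splunk's exact list):

```python
import re

# Simplified model of Splunk index-time segmentation: major breakers
# (whitespace, quotes, commas, brackets, ...) split raw text into terms.
MAJOR_BREAKERS = r'[\s,"\[\]()<>{};!]'

def indexed_terms(raw: str) -> set:
    """Return the set of terms this simplified segmenter would index."""
    return {t for t in re.split(MAJOR_BREAKERS, raw) if t}

event = 'service="JUNIPER-PROD", vs_name="tenant/juniper/services/jsp"'
terms = indexed_terms(event)

# The quotes separate service= from JUNIPER-PROD, so the compound term
# that TERM(service=JUNIPER-PROD) needs was never indexed:
print("service=JUNIPER-PROD" in terms)  # False
print("JUNIPER-PROD" in terms)          # True
```

So TERM(JUNIPER-PROD) could match, while TERM(service=JUNIPER-PROD) cannot, which is exactly the symptom described above.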
Thank you @livehybrid, good to know.

Regards,
Harry
@dwong-rtr

Splunk Cloud restricts customization of the "From" address for triggered alert emails. The default sender (alerts@splunkcloud.com) is hardcoded and cannot be changed via the UI or configuration files. As an option, you could set up an internal SMTP relay that receives emails from Splunk Cloud and re-sends them using your internal service address.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
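The core of such a relay is rewriting the From header before re-sending. A minimal sketch using Python's standard email library is below; the addresses are hypothetical, and in a real relay you would hand the rewritten message to smtplib's send_message() (or your MTA of choice):

```python
from email.message import EmailMessage

def rewrite_sender(msg: EmailMessage, new_from: str) -> EmailMessage:
    """Replace the From header before the relay re-sends the message."""
    del msg["From"]          # remove the hardcoded Splunk Cloud sender
    msg["From"] = new_from   # substitute the internal service address
    return msg

# Example: a message as it might arrive from Splunk Cloud
msg = EmailMessage()
msg["From"] = "alerts@splunkcloud.com"
msg["To"] = "oncall@example.com"
msg["Subject"] = "Splunk Alert: threshold exceeded"
msg.set_content("Alert details...")

relayed = rewrite_sender(msg, "splunk-alerts@example.com")
print(relayed["From"])  # splunk-alerts@example.com
```

Depending on your mail environment you may also need to align the envelope sender and SPF/DKIM for the new domain, which is outside this sketch.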
@addOnGuy

- Try clearing your browser cache or using an incognito window
- Check both the default and local directories inside your add-on; old parameters might be lingering in local
- Restart Splunk

If all else fails, try exporting the current version and re-importing it into Add-on Builder as a fresh project.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@arvind_Sugajeev I also got the `You do not have permission to share objects at the system level` response when providing only `owner`. I resolved it by including `owner`, `share`, and `perms.read`.
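For anyone hitting the same error, a sketch of the form-encoded body such an ACL update might carry is below. Field names follow Splunk's knowledge-object ACL endpoint convention, where the share flag is called `sharing`; the exact endpoint path and whether additional perms fields are required are assumptions to verify against your instance:

```python
import urllib.parse

def acl_payload(owner: str, sharing: str, read_roles: list) -> str:
    """Form-encoded body for a Splunk knowledge-object ACL update.

    Sending only `owner` triggers the permission error described above;
    the sharing level and read permissions must accompany it.
    """
    return urllib.parse.urlencode({
        "owner": owner,
        "sharing": sharing,                 # "user", "app", or "global"
        "perms.read": ",".join(read_roles),
    })

# Hypothetical values for illustration
body = acl_payload("admin", "app", ["user", "power"])
print(body)  # owner=admin&sharing=app&perms.read=user%2Cpower
```

This body would typically be POSTed to the object's .../acl endpoint with your usual authentication.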
This worked for me, thank you very much
Kindly repeat the step again  "select the forwarders" then when it comes to selecting the server class dont create a new one just select "existing"  and select the previous one you created and the "l... See more...
Kindly repeat the step again: "select the forwarders". When it comes to selecting the server class, don't create a new one; just select "existing", choose the one you created previously, and the "local event logs" will appear.
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to someh... See more...
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to somehow show the parameters I used for a prior version. I've checked the global config json file and everywhere else I could think of and they all reflect the parameters for the new version. Despite that, the UI still shows the old parameters. Does anyone have any idea why or where else I could check?
Thanks for replying, but there is no encryption. I used the modular framework of the Python script that it gave me as a template.
Are you saying that if you run that second search in a different app context, the behaviour is different? Note that your SPL logic of stats earliest(_time) as min_time will not tell you the actual search range, just the time of the earliest event it found. Try this SPL:

... | stats min(_time) as min_time max(_time) as max_time by index
| convert ctime(min_time) ctime(max_time)
| addinfo

The addinfo command will show you the actual search range used by the search, irrespective of any events found.
Hi all, When upgrading from v9.4.1 to a newer version (including 10) on MacOS (arm) i receive the error message: -> Currently configured KVSTore database path="/Users/andreas/splunk/var/lib/splunk/... See more...
Hi all,

When upgrading from v9.4.1 to a newer version (including 10) on macOS (ARM), I receive the error message:

-> Currently configured KVStore database path="/Users/andreas/splunk/var/lib/splunk/kvstore"
-> Currently used KVStore version=4.2.22. Expected version=4.2 or version=7.0
CPU Vendor: GenuineIntel
CPU Family: 6
CPU Model: 44
CPU Brand: \x
AVX Support: No
SSE4.2 Support: Yes
AES-NI Support: Yes

There seems to be an issue with determining AVX correctly through Rosetta?! Anyway, I tried to upgrade on v9.4.1 using:

~/splunk/bin/splunk start-standalone-upgrade kvstore -version 7.0 -dryRun true

and receive the error:

In handler 'kvstoreupgrade': Missing Mongod Binaries :: /Users/andreas/splunk/bin/mongod-7.0; /Users/andreas/splunk/bin/mongod-6.0; /Users/andreas/splunk/bin/mongod-5.0; /Users/andreas/splunk/bin/mongod-4.4; Please make sure they are present under :: /Users/andreas/splunk/bin before proceeding with upgrade.
Upgrade Path = /Users/andreas/splunk/bin/mongod_upgrade not found
Please make sure upgrade tool binary exists under /Users/andreas/splunk/bin

The error that mongod-4.4, mongod-5.0, mongod-6.0, and mongod-7.0 are missing is correct; the files are not there. They are not in the delivered Splunk .tgz for macOS, while the Linux tarball includes them. Any hints?

Best regards,
Andreas
That's what I forgot to mention: they are pushing everything to a HF on their side, which is pushing to my HFs. I will try that out. Thanks!
Hi @jessieb_83

Are they sending from a UF or a HF? If you aren't having much luck with this props/transforms combo, then it sounds like the data might be arriving to you already parsed, so what you're doing here won't have an effect. If the data has already been through a HF, then you could try this instead:

# props.conf
[linux_messages_syslog]
RULESET-dropSyslog = dropLog

# transforms.conf
[dropLog]
INGEST_EVAL = queue="nullQueue"

If you are sure it's coming from a UF, then you could try setting "REGEX = ." on your existing config; however, I think what you had should have worked.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
To follow up. Using

[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH

this fails under the following conditions:

- If you have an old `Splunkd.service` file with a line using =!, like the following: ExecStart=!/opt/splunk/bin/splunk _internal_launch_under_systemd. If so, you will need to recreate the Splunkd.service file.
- If you use the "Data inputs --> Files & directories" monitor method to ingest the /var/log/audit/audit.log files, this fails.

This works with a Splunkd.service file created by a current Splunk version (mine is 9.3.5) and the Splunk_TA_nix script method of ingest using rlog.sh. Kudos to @livehybrid for causing me to review and realize I had an out-of-date Splunkd.service file.
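For reference, a working unit along the lines described might contain a [Service] section like the minimal excerpt below. The paths come from the post; everything else (Type=, User=, and any other directives your Splunk-generated file carries) is omitted here, so treat this as a sketch and compare it against the Splunkd.service file your Splunk version actually generates:

```
[Service]
# ExecStart without the "!" prefix, unlike the old-format unit file:
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
# Grants read access (e.g. to /var/log/audit) without running as root:
AmbientCapabilities=CAP_DAC_READ_SEARCH
```

After editing the unit, a `systemctl daemon-reload` followed by a restart of Splunkd is needed for the change to take effect.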