All Posts

Hello All, below is my dataset from a base query. How can I calculate the average value of the column?

Incident | avg_time | days | hrs | minutes
P1 | 1 hour, 29 minutes | 0.06181041204508532 | 1.4834498890820478 | 29.00699334492286
P2 | 1 hour, 18 minutes | 0.05428940107829018 | 1.3029456258789642 | 18.176737552737862

I need to display the average of the avg_time values. Since there is date/time involved, merely doing the below is not working:

stats avg(avg_time) as average_ttc_overall
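One way this is often handled (a sketch only, assuming the hrs column carries the full duration as fractional hours, as it appears to above): average a numeric column rather than the formatted string, then convert the result back into a readable duration, since avg() cannot operate on text like "1 hour, 29 minutes".

<your base search>
| stats avg(hrs) as avg_hrs
| eval average_ttc_overall = tostring(round(avg_hrs * 3600, 0), "duration")

The round()/tostring(..., "duration") step is only for display; it turns the averaged value (in seconds) back into an HH:MM:SS-style string.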
Hi @livehybrid, Well, the strange thing is that mongo7 works for Splunk v10 when installing vanilla. I would strongly guess that the update does not work on Intel Macs either, because the mongo binaries needed for migration are not in the macOS Intel tarball.  Cheers, Andreas
Yes, you _can_ do that. It's just that managing search filters is less convenient, and thus maintenance of search filters is less straightforward, than simply selecting index access per role using the normal "allowed index" functionality. And you're still using search-time fields for limiting access, and those search-time fields can be overridden by your users. There's no way around that as long as you don't have indexed fields to search by.

To be fully honest, you already seem to have a relatively convoluted data architecture (judging from what you're saying), and you're stuck building on top of that unless you do some rearchitecting, which might need quite a lot of effort. That's the point when you should engage an experienced consultant to take a look at your environment and your requirements and give you a more holistic analysis and recommendations. We can give you hints about how Splunk works and what _can_ be done with it, but we won't tell you what you _should_ do in the end, because we don't know the whole picture and we're not taking responsibility for the final decisions.
Yes that should be left as is.
Why can't I give srchFilter as the role's index by default? What would be the drawback of this? For prod roles I need to mention the summary index condition as well, with their restricted service... exactly the same way @splunklearner described.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))
No, I don't think you can modify the behaviour of the collect command and hence the output event format (much). And yes, you'd have to do some magic like index!=your_specific_index OR (index=your_specific_index AND <additional conditions...) Ugly, isn't it?
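For illustration only, a completed sketch of that pattern as it could appear in authorize.conf; your_specific_index and some_allowed_service are hypothetical placeholder names, not taken from the thread:

[role_with_summary_access]
srchFilter = (index!=your_specific_index) OR (index=your_specific_index AND service=some_allowed_service)

The first clause leaves every other index searchable as before; the second only admits events from the restricted index when the extra field condition holds.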
Is there any way that I can get rid of the double quotes? And one more thing I noticed: if a user has access to 10 roles (10 indexes) and we have applied srchFilter to 6 roles, then when they search with the other 3 indexes (which are not in srchFilter), they are not seeing any results. Does it mean that if I use srchFilter by default I need to include its index name in srchFilter? srchFilter is getting added for all the roles with OR...
Ehh... didn't notice the value was enclosed in quotes. Quotes are major breakers, TERM won't work then.
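A sketch (untested, index and field names taken from the thread below) of how the filter could be adjusted once the value sits inside quotes in _raw: TERM() the bare value rather than the key=value pair, because the quote character is a major breaker between them, and keep the normal field filter for correctness.

(index=prod) OR (index=opco_summary AND TERM(JUNIPER-PROD) AND service=JUNIPER-PROD)

This assumes JUNIPER-PROD itself contains no major breakers, so it is still indexed as a single term; the hyphen is only a minor breaker.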
Hi @schose

A full Splunk Enterprise installation is not currently supported on macOS (see https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/10.0/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises) - only the UF package is supported. This could be for a number of reasons; however, MongoDB 5.0+ requires a CPU with AVX support, which Apple Silicon Macs do not provide (with Intel Macs aging out).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @addOnGuy

Try hitting the bump endpoint to clear the internal web cache: https://yourSplunkInstance/en-US/_bump
Then click the "Bump Version" button.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
But when I try to use TERM for the service field, no values are returned. The service field is still there in my raw summary event. Not sure what went wrong:

(index=prod) OR (index=opco_summary AND TERM(service=JUNIPER-PROD))

I even checked with only the summary index, and TERM with service is not working. This is my raw data for the summary index -- I extracted service from the original index, did |eval service = service, and then collected it into the summary index:

07/31/2025 04:59:56 +0000, search_name="decode query", search_now=1753938000.000, info_min_time=1753937100.000, info_max_time=1753938000. info_search_time=1753938000.515, uri="/wasinfkeepalive.jsp", fqdn="p3bmm-eu.systems.uk.many-44", service="JUNIPER-PROD", vs_name="tenant/juniper/services/jsp" XXXXXX
Thank you livehybrid, good to know.  Regards, Harry
@dwong-rtr

Splunk Cloud restricts customization of the "From" address for triggered alert emails. The default sender (alerts@splunkcloud.com) is hardcoded and cannot be changed via the UI or configuration files. However, you could consider setting up an internal SMTP relay that receives emails from Splunk Cloud and re-sends them using your internal service address.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a karma. Thanks!
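If you go the relay route, a minimal sketch of what the sender rewrite could look like on a Postfix-based relay (hypothetical addresses; other MTAs or your mail team's preferred mechanism would work just as well):

# /etc/postfix/main.cf (excerpt): rewrite both the envelope sender and the From: header
sender_canonical_classes = envelope_sender, header_sender
sender_canonical_maps = regexp:/etc/postfix/sender_canonical

# /etc/postfix/sender_canonical: map the hardcoded Splunk Cloud sender to your internal address
/^alerts@splunkcloud\.com$/    alerts@yourcompany.example

Regexp tables do not need postmap; just reload Postfix after editing so the change is picked up.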
@addOnGuy

- Try clearing your browser cache or using an incognito window
- Check both the default and local directories inside your add-on - old parameters might be lingering in local
- Restart Splunk

If all else fails, try exporting the current version and re-importing it into Add-on Builder as a fresh project.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a karma. Thanks!
@arvind_Sugajeev I also got the `You do not have permission to share objects at the system level` response when providing only `owner`. I resolved it by including `owner`, `share`, and `perms.read`.
This worked for me, thank you very much
Kindly repeat the step again: "select the forwarders". Then, when it comes to selecting the server class, don't create a new one; just select "existing", choose the previous one you created, and the "local event logs" will appear.
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to someh... See more...
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to somehow show the parameters I used for a prior version. I've checked the global config json file and everywhere else I could think of and they all reflect the parameters for the new version. Despite that, the UI still shows the old parameters. Does anyone have any idea why or where else I could check?
Thanks for replying, but no encryption. I used the modular frame of the Python script that it gave me as a template.
Are you saying that if you run that second search in a different app context, the behaviour is different? Note that your SPL logic to do stats earliest(_time) as min_time will not tell you the actual search range, just the time of the earliest event it found. Try this SPL:

... | stats min(_time) as min_time max(_time) as max_time by index
| convert ctime(min_time) ctime(max_time)
| addinfo

The addinfo command will show you the actual search range used by the search, irrespective of any events found.