All Posts

Hi @Ben, because, using inheritance, the new role inherits all permissions from the parent role. For example, if you create a role inheriting from user, which accesses all non-internal indexes, the new role will also have access to all non-internal indexes. Ciao. Giuseppe
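To illustrate the mechanics (a minimal sketch; the role and index names are made up, not from this thread), inheritance is configured with importRoles in authorize.conf, and index access is additive across inherited roles:

[role_restricted_analyst]
# Inherits ALL permissions of "user", including its searchable indexes
importRoles = user
# Additive only: this grants my_app_index on top of what "user" allows;
# it cannot take away the indexes inherited from "user"
srchIndexesAllowed = my_app_index

So to build a genuinely restricted role, start from a parent role that does not already grant broad index access.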
We are not a Splunk partner; we are a customer, so we need the contact details of our Splunk Cloud Account Manager to update our internal records. Could you please provide us with the name, email, and contact information of our assigned account manager?
Hi @gcusello, I will look at the app permissions. About your question, I created the role from scratch. Thanks for the tip, but why shouldn't I use inheritance as a general guideline? Thanks for replying. Ben
Hello all, we have a requirement to have a common dashboard for all applications. Each application has at most 2 indexes (one for non-prod env FQDNs and one for prod env FQDNs), and users are restricted by index. My questions are:

1. Can we create a common dashboard for all applications (there are 200+ indexes) by using index=* in the base search? We have indexes A to Z, but User A has access to only index A. If User A searches with index=*, will Splunk look at all indexes A to Z, or only index A which they have access to? (I am concerned about wasting Splunk resources.)

2. We have a separate role called test engineer with access to all indexes (A to Z). Is a common dashboard a good idea for everyone, given that when an engineer loads it, all indexes will be searched, which could in turn cause performance issues for other users?

3. We have app_name in place. Can I drop index=* from the base search and use app_name="*app_name*", with app_name as a dropdown, so that * is not used by default and the dashboard populates only once the user selects an app_name?

4. Or would separate dashboards for separate applications work? The ask is for a common dashboard, but I am not sure this is good practice.

Please enlighten me with your thoughts and the best approach.
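For what it's worth, a minimal sketch of the idea in question 3 (the token name app_tok is hypothetical, not from this post): the base search filters on a dropdown token rather than a bare index=*, so nothing heavy runs until a value is selected:

index=* app_name="$app_tok$"
| stats count by index, app_name

Also relevant to question 1: index=* is already constrained by role-based index access (srchIndexesAllowed), so for User A it effectively resolves to only the indexes that user is allowed to search.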
Always, always describe your use case in terms of data. Without data, it is very difficult for another person to understand what you are trying to achieve. Let me give you a starter. Suppose your raw data is

A   B    C     _time
1   38   299   2025-02-04 23:59:00
2   36   296   2025-02-04 23:56:00
3   34   291   2025-02-04 23:51:00
4   32   284   2025-02-04 23:44:00
5   30   275   2025-02-04 23:35:00
6   28   264   2025-02-04 23:24:00
7   26   251   2025-02-04 23:11:00
8   24   236   2025-02-04 22:56:00
9   22   219   2025-02-04 22:39:00
10  20   200   2025-02-04 22:20:00
11  18   179   2025-02-04 21:59:00
12  16   156   2025-02-04 21:36:00
13  14   131   2025-02-04 21:11:00
14  12   104   2025-02-04 20:44:00
15  10   75    2025-02-04 20:15:00
16  8    44    2025-02-04 19:44:00
17  6    11    2025-02-04 19:11:00
18  4    -24   2025-02-04 18:36:00
19  2    -61   2025-02-04 17:59:00
20  0    -100  2025-02-04 17:20:00
21  -2   -141  2025-02-04 16:39:00
22  -4   -184  2025-02-04 15:56:00
23  -6   -229  2025-02-04 15:11:00
24  -8   -276  2025-02-04 14:24:00
25  -10  -325  2025-02-04 13:35:00

This mock sequence spans roughly three 4-hour intervals. Now, if you bucket the sequence into 4-hour bins,

| bin _time span=4h@h

you get

A   B    C     _time
1   38   299   2025-02-04 20:00
2   36   296   2025-02-04 20:00
3   34   291   2025-02-04 20:00
4   32   284   2025-02-04 20:00
5   30   275   2025-02-04 20:00
6   28   264   2025-02-04 20:00
7   26   251   2025-02-04 20:00
8   24   236   2025-02-04 20:00
9   22   219   2025-02-04 20:00
10  20   200   2025-02-04 20:00
11  18   179   2025-02-04 20:00
12  16   156   2025-02-04 20:00
13  14   131   2025-02-04 20:00
14  12   104   2025-02-04 20:00
15  10   75    2025-02-04 20:00
16  8    44    2025-02-04 16:00
17  6    11    2025-02-04 16:00
18  4    -24   2025-02-04 16:00
19  2    -61   2025-02-04 16:00
20  0    -100  2025-02-04 16:00
21  -2   -141  2025-02-04 16:00
22  -4   -184  2025-02-04 12:00
23  -6   -229  2025-02-04 12:00
24  -8   -276  2025-02-04 12:00
25  -10  -325  2025-02-04 12:00

If you do stats/timechart on this, you get what you get. Now, what do you mean by "adjust the starting point of the spans"? What will the bucketed sequence look like? Give a concrete example using this dataset.
You can reproduce the above sequence using the following code:

| makeresults count=25
| streamstats count as A
| eval _time = strptime("2025-02-05", "%F") - 60 * A * A
| eval B = 40 - 2 * A, C = 300 - A * A
| bin _time span=4h@h
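If "adjust the starting point of the spans" means shifting where each 4-hour bucket begins, one possible sketch (assuming a Splunk version in which bin supports the aligntime option) is:

| makeresults count=25
| streamstats count as A
| eval _time = strptime("2025-02-05", "%F") - 60 * A * A
| eval B = 40 - 2 * A, C = 300 - A * A
| bin _time span=4h aligntime=@d+2h

With aligntime=@d+2h, bucket boundaries fall at 02:00, 06:00, 10:00, ... instead of 00:00, 04:00, 08:00, ...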
As @PickleRick says, do not use append unless absolutely necessary. I'd suggest a direct expression to do what you want:

index=index1 OR (index=index2 sourcetype="api")
| eval EventId = coalesce(EventId, Number__c)
| stats count values(index) as indices by EventId
| search count < 2 indices = index1
Thanks all of you for your help. I uninstalled and reinstalled Splunk Enterprise and realized it was the Splunk forwarder that was giving me the issue. Apparently, I had used the wrong port for the receiving indexer. Thanks again.
I have created a custom extension that captures the status of a scheduled job (e.g., Ready, Running, Others) and sends the data to AppDynamics as 1, 2, etc. respectively. Is there any way to configure Health Rules to accommodate the following conditions?

TaskName                              | Triggers
Failure_PushData_ToWellnessCore_CRON  | Start at 12 AM and execute every 3 hrs
PushToWellness fromDbtoConsoleApp     | Start at 5 PM and execute every 3 hrs
Wellnessdatacron                      | Starts at 12:01 PM and executes every 1 hr
WellnessFailureCRON                   | 9 AM, 12 AM, 3 PM, 6 PM, 10 PM, 1 AM, 5 AM
NoiseDataSyncCron                     | Start at 11 AM and execute every 1 hr
NoiseWebhookProcessor                 | Start at 11 AM and execute every 2 hrs

I tried configuring cron with a start time of 0 17/3 * * * and an end time of 59 23 30 4 * to accommodate the "Start at 5 PM and execute every 3 hrs" condition as a Health Rule schedule, but I am getting the error: Error creating schedule: Test. Cause: Unexpected end of expression. Can anyone help me with this?
How can I migrate SmartStore's local storage to a new storage device with no interruption to search and indexing functionality? Could it be as simple as updating homePath one index at a time, restarting the indexers, and allowing the cache manager to do the rest? 
Thank you for your response. I'm using a standalone Splunk UI Toolkit service (https://splunkui.splunk.com/Toolkits). It's linked within a Splunk app, so the Splunk UI app folder is added in Splunk/etc/apps.

I added a page using the npx splunk/create command. That page is a main menu, and I want to add a sub-menu, so I used react-router nested routing. However, this issue occurs when refreshing the page.

Can I make the sub-pages with nested routing be recognized by Splunk as well? Or should I create individual pages for the sub-menus instead?

I used routing because creating a new page takes a lot of time to load.
Firstly, join is almost never a solution to a Splunk problem. Secondly, you do not have Column1 as an output of your tstats search, so how can it match up Col1+Col2 with the start/end times?

Generally, if you want to enrich the start/end times with info from a lookup, you would run the tstats search, then look up the common field (Column1) and output the other fields (Column2).

If you want to end up with all the rows from the lookup in the output, even where there is no data for some of the rows, you would then add a final couple of commands, i.e.

| inputlookup File.csv append=t
| stats values(*) as * by Column1

which would give you all the rows from the lookup, plus start/end times from the data for each Column1 found in the tstats search.
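A sketch of the whole pipeline under stated assumptions (my_index is a placeholder, and Column1 must be a field tstats can see, i.e. an indexed field or one reachable via a data model):

| tstats earliest(_time) as start latest(_time) as end where index=my_index by Column1
| lookup File.csv Column1 OUTPUT Column2
| inputlookup File.csv append=t
| stats values(*) as * by Column1

The final stats merges the tstats rows with the appended lookup rows, so Column1 values with no events still appear, just without start/end times.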
Technically you could use both base searches, but it's a bit fiddly and isn't really going to save you anything, as the searches have to run anyway. You would get the job ids of each base search and then, in your panel search, use loadjob to load each of the jobs. However, you're still going to have to load the second job in some kind of subsearch (join?), so I'm not sure where you're trying to go with this.

If you are simply trying to speed up a join search, you can't achieve this with two base searches, as you are simply not changing anything and it will take the time it takes. The solution for a poorly performing search that uses join is to remove the join and rewrite the search another way.

Looking at your existing searches, I'm not sure why you are trying to combine these in the first place, because you have appcode in your first search and you simply want appcode to get the list of details from the lookup. You are doing a lookup in the primary search but doing nothing with the retrieved data. Why don't you just do the lookup in your primary search after the chart, i.e.

index=serverdata
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| chart dc(host_name) over appcode by host_environment
| eval TOTAL_servers=DEV+PAT+PROD
| table appcode DEV PAT PROD TOTAL_servers
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group
@alanzchan Were you able to find a solution to your problem?
Hi @gcusello,
Unfortunately, I am not in control of the application layer. Web logs could be in any directory on any drive. With a script to check and overwrite inputs.conf, a local Splunk restart would only be required if the location changes. That happens rarely, but we need to capture it.
1. Check the log location
2. Compare with the current setting in inputs.conf
3. Only if different: update & restart
Kind Regards,
Andre
Yeah, for reference, I've got that kind of thing; it's super simple. What I'm wondering is if there's an easy way, or even any way, to replicate the popup bubble so that it'd look like this: Because that would be a million times better. It's clear enough that I could forget the 5x1m rows... and it's like being handed out for free instantly when clicking on any field in a search...
I don't know CloudWatch, but from what I'm reading it uses either its own agent or you push data to its API endpoint. Splunk's UF is obviously not a CloudWatch agent, and it can only send out simple syslog (or syslog-like) output. So your best bet would probably be using two separate agents. Watching the same file should not be much of a problem (except for rare situations where monitoring a file with even just one agent would be problematic).
I'm trying to install the Qualys Technology Add-on (TA) (https://splunkbase.splunk.com/app/2964) into Splunk Cloud. I tried downloading it from Splunkbase and uploading it to Splunk Cloud, but received an error stating "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." When I try the "Browse More Apps" method, I cannot locate the Qualys TA. I DO see other Qualys apps such as Qualys FIM, Qualys VM, and Qualys CSAM, but I don't see the TA. What am I missing?
1. Be careful with the append command. It spawns a subsearch and is therefore limited by subsearch constraints (and can get finalized silently without producing full results). In your case, you could either use multisearch, since you have only streaming commands, or a single search with conditional assignment or evaluation to get EventId properly assigned:

index=index1 OR (index=index2 sourcetype=something)
| eval EventId=coalesce(EventId,Number__c)

(That assumes that when you have Number__c in your event, you don't have EventId; if that's not the case, you have to use if() or case() with your eval.)

2. To find not only whether there are two matching events but also which one is missing when there is only one, you have to do it slightly differently. First classify your events:

| eval classifier=if(index=index1,1,2)

Now you can do

| stats sum(classifier) by EventId

This way you'll get a value of 3 when both events are present, 1 if there is only an event from index1, or 2 if there is only an event from index2.
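Putting those pieces together, a sketch of the complete search (sourcetype and field names are the placeholders used above; pair_status and missing_from are illustrative names):

index=index1 OR (index=index2 sourcetype=something)
| eval EventId=coalesce(EventId, Number__c)
| eval classifier=if(index="index1", 1, 2)
| stats sum(classifier) as pair_status by EventId
| search pair_status < 3
| eval missing_from=if(pair_status=1, "index2", "index1")

pair_status=3 means both events are present; 1 means the index2 event is missing; 2 means the index1 event is missing.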
Tenable is a company. The right add-on depends on which Tenable products/services you are using.