All Posts

The only reason I can think of for this issue is a permissions conflict. Did you look at the search.log as mentioned? Try comparing the search.log files from the working and non-working instances. Without knowing the full search details, it's hard to say exactly what's going on. There must be settings defined for this sourcetype. Try running the btool command (replacing sourcetype with your sourcetype name) and see if you can find anything relevant there:

splunk btool props list sourcetype --debug

Hope this helps.
Not sure what is happening with your login attempts, but I highly recommend that you do not enable the web GUI on any indexer. The cluster should be managed only by the CM, since with the web GUI enabled the risk of configurations getting out of sync is very high.
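If Splunk Web is already enabled on an indexer and you want it off again, a minimal sketch of the standard setting (file path assuming a typical config layout) is:

# $SPLUNK_HOME/etc/system/local/web.conf on each indexer
[settings]
startwebserver = 0

A restart of the indexer is required for the change to take effect.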
On Browser Tests we have Auto-retry enabled, and when a test fails, Auto-retry kicks in and updates the results. On the browser test's Page Availability section, clicking in certain places shows this flyout: "Multiple runs found for Uptime". How do I view this section? (I'm having a hard time finding it.)
Splunk introspection data is a snapshot in time and reflects reality every 10 seconds. https://docs.splunk.com/Documentation/Splunk/9.3.1/RESTREF/RESTintrospect#server.2Fstatus.2Fresource-usage.2Fiostats

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval pct_mem=round(('data.mem_used'/'data.mem')*100,2)
| timechart span=10s max(pct_mem) as pct_mem

That will give you the overall view.

index=_introspection sourcetype=splunk_resource_usage component=PerProcess "data.mem_used"="*"
| rename data.* as *
| timechart span=10s max(mem_used) as mem_used by process_type

This will break it down by process over time.

Compare this with your VM metrics; perhaps VMC is reporting averages or medians per time period.
We have some tokens that are due to expire shortly.

Q1: Does the 'Default' token automatically rotate?
Q2: How do you manually rotate a token using the dashboard? (I am aware of the API option.)
Q3: If the API call is the only option, what permissions are required to make the 'rotate' API call?

Thanks in anticipation.
Ian
Yes, those are two separate issues.
Hi,
Can someone please tell me how to compare the value of a particular day with the value of the same day last week, and create a new field for the deviation?

Example: the command below generates the output shown:

| stats sum(Number_Events) as TOTAL by Field1 Field2 Field3 Day Time Week_of_year

We need output like the following:

1. In tabular form: is it possible to have an output like below?
2. If point 1 is possible, is it then possible to have a timechart with 3 lines over the 24 hours of the day? (Example data for 3 hours is attached.)
- line 1 corresponds to week of year -2 (39)
- line 2 corresponds to week of year -1 (40)
- line 3 corresponds to the current week of year (41)

Thanks in advance for helping me out.
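A minimal sketch of one way to line up the two weeks, assuming a search window covering the last two full weeks (the index name and the Number_Events field are placeholders for whatever your data actually uses):

index=your_index earliest=-2w@w latest=@w
| bin _time span=1h
| eval period=if(_time>=relative_time(now(),"-1w@w"),"latest_week","week_before")
| eval slot=strftime(_time,"%w-%A %H:00")
| chart sum(Number_Events) over slot by period
| eval deviation='latest_week'-'week_before'

The %w prefix in slot keeps the day/hour rows in chronological order, and the final eval yields the week-over-week deviation per day and hour.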
I find this very confusing; it looks like you have overlapped two separate issues.
Same problem here; it's difficult to find how a simple option like option name=count from Simple XML works in Studio. I want to display more than 100 rows without a Next button. After searching in the Studio dashboard I found it, and indeed it is simple: Data display - Rows displayed.
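For reference, roughly how the two forms compare; the Studio JSON shape below is a sketch (the data source name ds_example is a placeholder), so verify the option name against your version:

Simple XML, on the table element:

<option name="count">200</option>

Dashboard Studio source, under the table visualization's options:

{
    "type": "splunk.table",
    "options": {
        "count": 200
    },
    "dataSources": {
        "primary": "ds_example"
    }
}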
Create an app to be pushed from the CM to the IDX tier and put an inputs.conf file in it. https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Inputsconf#HTTP_Event_Collector_.28HEC.29_-_Local_stanza_for_each_token

[http://sc4s]
token = XXXXX
index = target-index-name

### This is the bare minimum I suggest.
### SC4S may require a sourcetype; other vendor sources may already come with that value assigned.
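One detail worth adding: the token stanza alone does not start the HEC listener. A minimal sketch of a complete pushed app, assuming an illustrative app name (on newer versions the directory is manager-apps rather than master-apps):

# $SPLUNK_HOME/etc/master-apps/hec_inputs/local/inputs.conf on the CM

[http]
disabled = 0
port = 8088

[http://sc4s]
disabled = 0
token = XXXXX
index = target-index-name

Push it to the peers with splunk apply cluster-bundle, and every indexer will listen on port 8088 with the same token.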
Where have you configured these settings? And how do you pull/push the data to your indexer? Are there any heavy forwarders involved? Feel free to share a sample event with us.
Hello,
I'm figuring out the best way to address the above situation. We have a huge multisite cluster with 10 indexers on each site; a dedicated instance should act as the SC4S instance and send everything to a load balancer, whose job will be to forward everything to the cluster.

Now, there is plenty of documentation about the implementation, but I still can't wrap my head around the direct approach.

The SC4S config stanza would currently look something like this:

[http://SC4S]
disabled = 0
source = sc4s
sourcetype = sc4s:fallback
index = main
indexes = main, _metrics, firewall, proxy
persistentQueueSize = 10MB
queueSize = 5MB
token = XXXXXX

Several questions about that, though:
- I'd need to create a HEC token first, before configuring SC4S. In a clustered environment, where do I create the HEC token? I've read that I should create it on the CM and then push it to the peers, but how exactly? I can't find much information about the specifics, especially since I'm trying to configure it via config files, so an example of the correct stanza to push out would be great; I just can't find any.
- Once pushed, I need to configure SC4S on the other side, including the generated token (as seen above). Does the config here seem correct? There's a lack of example configs, so I'm spitballing a little here.

Kind regards
| eval url=if(mvindex(split(url,"/"),1)="getFile","/getFile",url)
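Extending that eval into the full aggregation, a minimal sketch, assuming placeholder index/sourcetype names and a response-time field called duration:

index=your_index sourcetype=your_api_logs
| eval endpoint=if(match(url,"^/reports/getFile/"),"/reports/getFile",url)
| stats count avg(duration) as avg perc95(duration) as p95 perc99(duration) as p99 by endpoint

match() makes the grouping independent of how many path segments follow getFile, and the stats are then computed once per collapsed endpoint.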
This got resolved somehow. Now I am trying to connect to the remote desktop (my peer's Splunk with my Power BI) to do a POC. I have tried almost everything, but I am unable to connect to the other Splunk instance; I can only connect to my own. @ashvinpandey, can you please guide me?
I want all the APIs matching /reports/getFile/* grouped as one, and then the p95, p99, average, count, etc. taken over the group. I don't want them as separate entries, since the endpoint is the same and only the ID differs:

/getFile/1
/getFile/2
/getFile/3

These should be grouped as a single entry, /getFile, with all the statistics calculated across the three: count = 3 (for /1, /2, /3), p95 = the p95 over all three similar calls to /getFile/*, and so on.
| bin _time span=1d
| stats count(eval(_time>=relative_time(now(),"@d-1d"))) as 24hCount count(eval(_time>=relative_time(now(),"@d-30d"))) as 30dCount count(eval(_time>=relative_time(now(),"@d-90d"))) as 90dCount by Country
I am trying to use my friend's credentials to log into Splunk Enterprise, and I am unable to do so.

Also, I am using ODBC to connect Splunk with Power BI. When I do that locally it works, but when I try to connect remotely it fails; I am having issues with the server URL and port number. Any help in solving these issues would be appreciated. TIA.
Essentially you need to extract the part that you want from the url field. For example, is it always the first two parts, or fewer, or does this apply only to particular URLs? Please describe your requirement in more detail.
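If it does turn out to be "always the first two path segments", a sketch along these lines would do it (assuming url always starts with a leading slash):

| eval endpoint=mvjoin(mvindex(split(url,"/"),0,2),"/")

For /reports/getFile/123, split produces ["", "reports", "getFile", "123"], the first three elements are kept, and mvjoin reassembles them as /reports/getFile.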
Please can you share the source of (the relevant parts of) your dashboard so we can see what settings you have used.