All Posts

I find this very confusing; it looks like you have overlapped two separate issues.
Same problem here; it's difficult to find out how a simple option like option name=count works in Studio. I want to display more than 100 rows without a Next button. After searching in the Studio dashboard I found it, and indeed it is simple: Data display - rows displayed.
Create an app to be pushed from the CM to the IDX tier and put an inputs.conf file in it. https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Inputsconf#HTTP_Event_Collector_.28HEC.29_-_Local_stanza_for_each_token

[http://sc4s]
token = XXXXX
index = target-index-name

### This is the bare minimum I suggest ###
SC4S may require a sourcetype; other vendor sources may already come with that value assigned.
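A minimal sketch of how such an app could be laid out on the Cluster Manager before pushing it to the peers (the app name `hec_inputs` and the index name are hypothetical, and the path is the 9.x `manager-apps` location; older versions use `master-apps`):

```ini
# $SPLUNK_HOME/etc/manager-apps/hec_inputs/local/inputs.conf

# Enable the HEC endpoint itself on the indexers
[http]
disabled = 0
port = 8088

# One stanza per token; replace XXXXX with your generated token value
[http://sc4s]
token = XXXXX
index = target-index-name
```

After placing the app, pushing it with `splunk apply cluster-bundle` on the CM distributes it to all peers; the same token value then goes into the SC4S configuration on the sending side.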
Where have you configured these settings? And how do you pull/push the data to your indexer? Are there any heavy forwarders involved? Feel free to share a sample event with us.
Hello, I'm figuring out the best way to address the above situation. We have a huge multisite cluster with 10 indexers on each site; a dedicated instance should act as the SC4S instance and send everything to a load balancer, whose job will be to forward everything to the cluster. Now, there is quite a bit of documentation about the implementation, but I still can't wrap my head around the direct approach. The SC4S config stanza would currently look something like this:

[http://SC4S]
disabled = 0
source = sc4s
sourcetype = sc4s:fallback
index = main
indexes = main, _metrics, firewall, proxy
persistentQueueSize = 10MB
queueSize = 5MB
token = XXXXXX

Several questions about that, though:
- I'd need to create a HEC token first, before configuring SC4S, but in a clustered environment, where do I create the HEC token? I've read that I should create it on the CM and then push it to the peers, but how exactly? I can't find much info about the specifics, especially since I'm trying to configure it via config files, so an example of the correct stanza that has to be pushed out would be great - I just can't find any.
- Once pushed, I need to configure SC4S on the other side, including the generated token (as seen above). Does the config here seem correct? There's a lack of example configs, so I'm spitballing here a little bit.

Kind regards
| eval url=if(mvindex(split(url,"/"),1)="getFile","/getFile",url)
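The eval above splits the url on "/" and collapses anything whose first segment is getFile into a single /getFile bucket. A small Python sketch of the same logic, purely as an illustration (not Splunk code):

```python
def normalize_url(url: str) -> str:
    """Collapse /getFile/<id> style paths into one /getFile bucket."""
    parts = url.split("/")
    # For "/getFile/1", split gives ["", "getFile", "1"], so parts[1]
    # is the first path segment - mirroring mvindex(split(url, "/"), 1)
    if len(parts) > 1 and parts[1] == "getFile":
        return "/getFile"
    return url

print(normalize_url("/getFile/1"))      # -> /getFile
print(normalize_url("/reports/list"))   # -> /reports/list
```

Once every /getFile/&lt;id&gt; value maps to the same key, a stats-style aggregation by that key naturally combines their counts and percentiles.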
This gets resolved somehow. Now I am trying to connect to the remote desktop (my peer's Splunk with my Power BI) to do a POC. I have tried almost everything, but I am unable to connect to the other Splunk instance; I can only connect to my own. @ashvinpandey Sir, can you please guide me?
I want all the APIs with /reports/getFile/* grouped as one, and then to take the p95, p99, average, count, etc. I don't want them as separate entries, since the endpoint is the same and only the ID differs.

/getFile/1
/getFile/2
/getFile/3

These should be grouped as one entry, /getFile, and all the p95, p99, and count values should be calculated across all three. So /getFile would have count = 3 (from /1, /2, /3), p95 = the p95 over all three similar API calls matching /getFile/*, and so on.
| bin _time span=1d
| stats count(eval(_time>=relative_time(now(),"@d-1d"))) as 24hCount count(eval(_time>=relative_time(now(),"@d-30d"))) as 30dCount count(eval(_time>=relative_time(now(),"@d-90d"))) as 90dCount by Country
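The idea behind that stats call - counting the same events against several overlapping time windows in a single pass - can be sketched in Python. This is only an illustration of the logic (it ignores the @d snap-to-midnight that relative_time applies, and the events and cutoff date are made up):

```python
from datetime import datetime, timedelta

def window_counts(events, now):
    """Count (country, timestamp) events falling in the last 1/30/90 days."""
    windows = {"24hCount": 1, "30dCount": 30, "90dCount": 90}
    counts = {}
    for country, ts in events:
        row = counts.setdefault(country, {name: 0 for name in windows})
        # One event can land in several overlapping windows, like the
        # three conditional count(eval(...)) clauses in the SPL
        for name, days in windows.items():
            if ts >= now - timedelta(days=days):
                row[name] += 1
    return counts

now = datetime(2024, 7, 1)
events = [
    ("US", now - timedelta(hours=3)),    # counts in all three windows
    ("US", now - timedelta(days=10)),    # counts in 30d and 90d only
    ("Aus", now - timedelta(days=60)),   # counts in 90d only
]
print(window_counts(events, now))
```

Sorting the resulting rows by the largest window count then gives the highest-to-lowest ordering the question asks for.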
I am trying to use the credentials of my friend to log into Splunk Enterprise, and I am unable to do that. Also, I am using ODBC to connect Splunk with Power BI. When I do that locally it works, but when I try to do it remotely I am unable to connect; I am having issues with the server URL and port number. Any help would be appreciated to solve these queries. TIA.
Essentially you need to extract from the url field the part that you want. For example, is it always the first two parts, or fewer, or is it only applied to particular URLs? Please describe your requirement in more detail.
Please can you share the source of (the relevant parts of) your dashboard so we can see what settings you have used.
Hi Splunkers!

I have a question about memory. In my Splunk monitoring console, I see approximately 90% of memory used by Splunk processes; the amount of memory is 48 GB. In my vCenter, I can see that only half of the assigned memory is used (approximately 24 GB out of the 48 GB available).

Who is telling me the truth: Splunk monitoring or vCenter? And overall, is there something to configure in Splunk to use the entire available memory?

Splunk 9.2.2 / Red Hat 7.8. Thank you.

Olivier.
Hey hgarnica, I have the same issue: I was not able to run the search from Power BI. What type of modifications or permissions do I need to provide, and what would a sample URL for connecting to Splunk look like? I was using https://hostname:8089 - do we need to give any specific app names like that? Thanks in advance; awaiting your response.
I have created a stacked bar chart based on a data source (query) and everything works with one exception: I have to select each data value to display when the query runs, through Data Configuration - Y. All of my desired values show up there, but they are not selected by default, so the chart is blank until I select them?
Has there been any further information regarding this error? I am still unable to install the app in Splunk.
My query is:

index=stuff
| search "kubernetes.labels.app"="some_stuff" "log.msg"="Response" "log.level"=30 "log.response.statusCode"=200
| spath "log.request.path"
| rename "log.request.path" as url
| convert timeformat="%Y/%m/%d" ctime(_time) as date
| stats min("log.context.duration") as RT_fastest max("log.context.duration") as RT_slowest p95("log.context.duration") as RT_p95 p99("log.context.duration") as RT_p99 avg("log.context.duration") as RT_avg count(url) as Total_Req by url

And I am getting the attached screenshot response. I want to club all the similar APIs, like all the /getFile/* calls, as one API and get the average time.
It did not work. It is still giving all the events other than the expected one.
Hi,
I have events with multiple countries. I want to count the country field across different time ranges, sorted from the highest country count to the lowest. For example:

Country    Last 24h    Last 30 days    Last 90 days
US         10          50              100
Aus        8           35              80

I need a query; kindly assist me.