Hi Splunkers! I have a question about memory. In my Splunk Monitoring Console, approximately 90% of memory is shown as used by Splunk processes, out of 48 GB total. In my vCenter, however, I can see that only half of the assigned memory is used (approx. 24 GB of the 48 GB available). Which one is telling me the truth: Splunk monitoring or vCenter? And overall, is there something to configure in Splunk so it uses all of the available memory? Splunk 9.2.2 / Red Hat 7.8. Thank you. Olivier.
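One way to cross-check the Monitoring Console number is to query the _introspection index it reads from, which records host-wide memory usage (a minimal sketch, assuming default introspection logging is enabled and the usual data.mem / data.mem_used field names):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval mem_used_pct = round('data.mem_used' / 'data.mem' * 100, 1)
| timechart avg(mem_used_pct) AS avg_mem_used_pct
```

One common reason the two tools disagree: Linux counts filesystem cache and buffers as "used" memory, while vCenter's active-memory metric generally does not, so the guest OS (and Splunk's view of it) often reports much higher usage than the hypervisor.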
Hey hgarnica, I have the same issue: I was not able to run the search from Power BI. What type of modifications or permissions do I need to provide, and what would a sample URL for connecting to Splunk look like? I am using https://hostname:8089. Do we need to specify anything app-specific, like an app name? Thanks in advance; awaiting your response.
I have created a stacked bar chart based on a data source (query), and everything works with one exception: when the query runs, I have to select each data value to display under Data Configuration - Y. All of my desired values show up there, but they are not selected by default, so the chart is blank until I select them. Is there a way to have them selected automatically?
My query is:

index=stuff
| search "kubernetes.labels.app"="some_stuff" "log.msg"="Response" "log.level"=30 "log.response.statusCode"=200
| spath "log.request.path"
| rename "log.request.path" as url
| convert timeformat="%Y/%m/%d" ctime(_time) as date
| stats min("log.context.duration") as RT_fastest max("log.context.duration") as RT_slowest p95("log.context.duration") as RT_p95 p99("log.context.duration") as RT_p99 avg("log.context.duration") as RT_avg count(url) as Total_Req by url

and I am getting the response in the attached screenshot. I want to group all the similar APIs together, for example treat all the /getFile/* paths as one API, and get the average time.
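A common way to group such paths is to normalize the url field with a regex before the stats, collapsing the variable suffix into one label. A sketch (only the /getFile pattern from the question is shown; add a replace() per endpoint family as needed):

```
... | rename "log.request.path" as url
| eval url = replace(url, "^/getFile/.*", "/getFile/*")
| stats avg("log.context.duration") AS RT_avg count AS Total_Req by url
```

With the eval placed before the stats, all /getFile/... events fall into the single "/getFile/*" group, so the averages and counts are computed per API family rather than per unique path.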
Hi, I have events containing multiple countries. I want to count the country field across different time ranges, sorted from the highest country count to the lowest. For example:

Country   Last 24h   Last 30 days   Last 90 days
US        10         50             100
Aus       8          35             80

I need a query for this; kindly assist me.
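A common pattern for multi-window counts like this is to search the widest range (90 days) once and count conditionally with relative_time(). A sketch, assuming a hypothetical index name and that the field is called Country:

```
index=your_index earliest=-90d
| stats count(eval(if(_time >= relative_time(now(), "-24h"), 1, null()))) AS "Last 24h"
        count(eval(if(_time >= relative_time(now(), "-30d"), 1, null()))) AS "Last 30 days"
        count AS "Last 90 days"
        by Country
| sort - "Last 24h"
```

The if(..., 1, null()) idiom makes count() skip events outside each window, so one pass over 90 days of data produces all three columns.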
I have ingested data from InfluxDB into Splunk Enterprise using the InfluxDB add-on from Splunk DB Connect. When performing an InfluxQL search in the SQL Explorer of the created InfluxDB connection, I am getting empty values in the value column. Query:

from(bucket: "buckerName")
|> range(start: -6h)
|> filter(fn: (r) => r._measurement == "NameOfMeasurement")
|> filter(fn: (r) => r._field == "value")
|> yield(name: "count")

Splunk DBX Add-on for InfluxDB JDBC
Hi @Poojitha, you can add multiple tokens on the same configuration page! Please refer to the image I am attaching. Is this what you are looking for?
Hi @gcusello, thanks for the feedback. I wanted to understand: if we change the OS on the first physical node after restoring the backup, that node will then be running Red Hat while the other 3 are still running CentOS. Will this node still be part of the cluster? Can servers with different operating systems be part of the same cluster? Thanks
I migrated to v9.1.5 from a v7.3.6 where the TA-XLS app was installed and working. Running 'outputxls' now generates a 'cannot concat str to bytes' error on the following line of the app's outputxls.py file:

try:
    csv_to_xls(os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output)

Things I have tried, none of which worked:

- Encoding by appending .encode('utf-8') to the string.
- Importing the six and futurize/modernize libraries and running them to "upgrade" the script; it only added "from __future__ import absolute_import" and changed one line.
- Defining each part as its own variable, plus some other variations:

splunk_home = os.environ['SPLUNK_HOME']
static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
output_bytes = output
csv_to_xls(splunk_home + static_path.encode(encoding='utf-8') + output)

I rely on this app to work, so any kind of help is appreciated. Thanks!
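The error suggests that under Python 3 (which Splunk 8+ uses, unlike the Python 2 of v7.3.6) the 'output' value arrives as bytes, and bytes cannot be concatenated to a str path. A minimal sketch of the likely fix is to decode the bytes side rather than encode the str side; 'app_name' and the filename below are placeholders, not the app's real values:

```python
import os

# Assumption for the demo: SPLUNK_HOME may not be set outside a Splunk host.
os.environ.setdefault('SPLUNK_HOME', '/opt/splunk')

output = b"report.xls"  # hypothetical filename that arrives as bytes under Python 3

# Fix: convert bytes -> str before concatenating, so every operand is str.
if isinstance(output, bytes):
    output = output.decode('utf-8')

path = os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output
print(path)
```

Encoding the str side (as in the attempts above) goes the wrong direction: it produces a bytes + str mix again, just with the roles swapped, which is why those variants raise the same class of error.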
@mrilvan, there is only a Splunk app for that at the moment and nothing on the SOAR side. However, if the API is available, there is nothing stopping you from building a custom app on the platform, as I am sure XSOAR is just another REST API.
You appear to be missing part of the answer: hot and warm buckets are normally stored on expensive, fast storage, whereas cold buckets are stored on cheaper, slower storage in order to reduce costs. Using these distinctions, Splunk gives organisations the flexibility to manage the cost of their storage infrastructure.
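As an illustration of how that split is configured, indexes.conf lets each index point its hot/warm buckets and its cold buckets at different volumes; the paths below are examples, not defaults:

```
# indexes.conf (illustrative paths)
[volume:fast]
path = /mnt/ssd/splunk

[volume:cheap]
path = /mnt/nas/splunk

[main]
homePath = volume:fast/main/db        # hot + warm buckets on fast storage
coldPath = volume:cheap/main/colddb   # cold buckets on cheap storage
thawedPath = $SPLUNK_DB/main/thaweddb
```

Splunk rolls buckets from homePath to coldPath automatically as they age, so the cost tiering happens without any change to how searches address the index.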
Hi Splunk Community, I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that hostname validation works properly.
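For reference, a minimal web.conf stanza for custom Splunk Web certificates looks like the following (the paths are examples; Splunk Web needs a restart after editing):

```
# $SPLUNK_HOME/etc/system/local/web.conf (illustrative)
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
```

Note that the hostname-validation warning is governed separately, by the server.conf [sslConfig] cliVerifyServerName setting the message itself points to, so fixing the web.conf certificates alone will not silence it.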
How can we see the preview of the Splunk AI app? We have already accepted the terms and conditions, but we still haven't received any email notification from Splunk Support to install the Splunk AI app.