All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Has anyone experienced this kind of broken UI in Dashboard Studio? I've tried restarting Splunk, but it's still happening.
I have a 2015 Aruba log I need to analyze. The log does not include the year, so Splunk assigns the current year (2023). Is there a way to adjust the year? Here is a sample of my log (IPs, MACs, and device names are XXX'd out):

Mar 14 12:37:13 stm[XXXXXX]: |AP XXXXXX stm| Auth request: XXXX AP XXX auth_alg 0
Mar 14 12:37:13 stm[XXXXXX]: |AP XXXXXX stm| Auth success: XXXXXX: AP XXXXXX
Mar 14 12:37:33 stm[XXXXXX]: |AP XXXXXX stm| Deauth to sta: XXXXXX: Ageout AP

Here is what Splunk has (screenshot). Thank you.
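If the events are already indexed with the wrong year, one search-time workaround (a sketch, not a parsing-time fix) is to rebuild _time with the known year:

```
| eval _time = strptime("2015 " . strftime(_time, "%m-%d %H:%M:%S"), "%Y %m-%d %H:%M:%S")
```

For an ingest-time correction you would also need to raise MAX_DAYS_AGO in props.conf (its default of 2000 days is too small to accept a 2015 timestamp in 2023), since Splunk otherwise rejects timestamps that far in the past.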
Hello Splunkers, I configured Splunk to read the paid GeoIP2 Enterprise database by adding the [iplocation] stanza to limits.conf on the search head and indexer: "db_path = /Path/to/database/GeoIP2-Enterprise.mmdb". I also went into Splunk Web and uploaded the mmdb file under Settings > Lookups > GeoIP lookups file. After a quick Splunk restart, Splunk is still using the free GeoIP database that came preinstalled with Splunk. Has anyone successfully integrated the MaxMind GeoIP2 Enterprise database with Splunk Enterprise v9? Additionally, can I use the iplocation command to parse out the new fields from the GeoIP2 Enterprise database, such as connection_type, user_type, country_confidence, etc.? Thank you!
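For reference, a minimal sketch of the [iplocation] override (the path here is an assumption; it must exist, be readable by the splunk user, and be identical on every search head and indexer, and a restart is required):

```
# limits.conf (sketch)
[iplocation]
db_path = /opt/splunk/share/GeoIP2-Enterprise.mmdb
```

Worth checking is whether the stanza actually wins in configuration layering: `splunk btool limits list iplocation --debug` shows which file supplies db_path. Note also that, as far as I know, the iplocation command only emits its documented fields (City, Country, Region, lat, lon, and a few more with allfields=true), so Enterprise-only fields such as connection_type may not surface even with the paid database loaded.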
Hello, how could I figure out whether my indexed DLP data is CIM compliant in my Splunk ES?
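One quick sanity check (a sketch; the model and dataset names assume the "Data Loss Prevention" data model shipped with the Splunk Common Information Model app, and your_dlp_index is a placeholder) is to see whether your events appear when constrained by the CIM model:

```
| datamodel DLP DLP_Incidents search
| search index=your_dlp_index
```

If the events come back, the tags and core fields line up; if not, comparing your extracted fields against the model's required fields (e.g. with `index=your_dlp_index | fieldsummary`) shows what is missing. The CIM app's Data Model Audit dashboards offer a more systematic view.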
Hello, we have been facing a weird issue in Splunk Enterprise 9.x versions. In all dashboards where we use self-closing tags like <i class="CLASS NAME" />, the tags are not working as expected. <i class="CLASS NAME" /> is no longer recognized by Splunk as a self-closing tag; when we change it to <i class="CLASS NAME" ></i> it works. In earlier Splunk versions, <i class="CLASS NAME" /> was fine and raised no errors. We have used thousands of these tags across many Splunk applications and cannot change them all now. Is there any core Splunk file that can be changed to make this work as in earlier versions?
Hello, I need some assistance with the alert throttle functionality in Splunk. Even though we have alert throttling enabled with suppression set to 60 minutes, the alert still generates a trigger every 10 minutes, at 00:10, 00:20, 00:30, 00:40, and 00:50. I only want the 00:10 event to trigger, and then suppress the 00:20, 00:30, 00:40, and 00:50 events. Thank you in advance, Veeru
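In case it helps: throttling often behaves this way when the alert is set to trigger "For each result" but no suppression fields are defined, so each new result is treated as a new alert. A savedsearches.conf sketch (the field name host is an assumption; use whatever field identifies a duplicate for you):

```
# savedsearches.conf (sketch)
alert.track = 1
alert.suppress = 1
alert.suppress.period = 60m
# Only needed when the trigger condition is per-result:
alert.suppress.fields = host
```

The same settings appear in the UI under the alert's "Trigger Actions > Throttle" options.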
It seems even the latest appdynamics-gradle-plugin:23.2.1 is not compatible with the latest Android Gradle Plugin, com.android.tools.build:gradle:8.0.0, because the Transform API has been removed: https://developer.android.com/reference/tools/gradle-api/7.2/com/android/build/api/transform/Transform

An exception occurred applying plugin request [id: 'adeum']
> Failed to apply plugin 'adeum'.
> API 'android.registerTransform' is removed.

Is there anything I can do besides downgrading the Android Gradle Plugin? Cheers, Jerg
I have a field called APM_ID. I want to keep only the APM IDs from this field (e.g. A1002, A0001) and group the rest of the values as "shared service". What query can I write for the desired output? Sample APM_ID values: ABCDE-FVG-HH, HBBB-NDBXB-SM, A1001, SBSKS, A0002, JJSKM, A0009, A2002
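A sketch of one way to do this, assuming an APM ID is exactly the letter A followed by four digits (adjust the regex if the real pattern differs):

```
... your base search ...
| eval APM_group = if(match(APM_ID, "^A\d{4}$"), APM_ID, "shared service")
| stats count by APM_group
```

With the sample values above, A1001, A0002, A0009, and A2002 would stay as themselves and everything else would roll up into "shared service".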
Hello, I am trying to build a search that uses metrics to detect when a device has not been connected for the last 90 days.

| mcatalog values(id) WHERE index=AM AND metric_name=CN AND type="device" by id | table id

This shows the devices that are currently connected. I have an input lookup with the device inventory, Device_Inv.csv. Is there a way to create a search that reads the lookup table and uses the metrics data to see whether a device has been offline for 90 days or more? Many thanks
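A sketch of one approach (it assumes the lookup's device column is also named id; if mcatalog on your version ignores inline earliest/latest, set the outer time range to the last 90 days instead):

```
| inputlookup Device_Inv.csv
| search NOT
    [| mcatalog values(id) as id WHERE index=AM AND metric_name=CN AND type="device" earliest=-90d latest=now
     | mvexpand id
     | fields id]
```

The subsearch returns every device id seen in the metrics over the window, and the outer NOT keeps only inventory entries that never appeared, i.e. devices offline for 90+ days.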
Hello everyone, below is the set of log response patterns:

"message":{"input":"999.111.000.999 - - [06/Apr/2023:05:45:51 +0000] \"GET /shopping/carts/v1/83h3h331-g494-28h4-yyw7-dq123123123d/summary HTTP/1.1\" 200 636 8080 13 ms"}
"message":{"input":"999.111.000.999 - - [06/Apr/2023:04:08:13 +0000] \"GET /shopping/carts/v1/83h3h331-g494-28h4-yyw7-dq123123123d HTTP/1.1\" 200 1855 8080 10 ms"}
"message":{"input":"999.111.000.999 - - [06/Apr/2023:04:08:13 +0000] \"GET /shopping/carts/v1/73737373-j3j3-8djd-jdjd-kejdjehi3nej/product HTTP/1.1\" 200 1855 8080 10 ms"}
"message":{"input":"999.111.000.999 - - [06/Apr/2023:04:08:13 +0000] \"GET /location-context/stations/v1/CJS?module=ONLINE_BOOKING&requestedPoint=DESTINATION HTTP/1.1\" 200 1855 8080 10 ms"}

From the above, I want to extract only the strings of the form GET /shopping/carts/v1/<ending with an id alone> HTTP. I tried the query below as an intermediate step to extract the URLs (servicename is a pre-extracted variable):

index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=contaner | rex field=message.input "(?<servicename>(?:[^\"]|\"\")*HTTP)" | dedup servicename | stats count by servicename

But this query returns all the patterns:

GET /shopping/carts/v1/83h3h331-g494-28h4-yyw7-dq123123123d/summary HTTP
GET /shopping/carts/v1/83h3h331-g494-28h4-yyw7-dq123123123d HTTP (I need only this)
GET /shopping/carts/v1/73737373-j3j3-8djd-jdjd-kejdjehi3nej/product HTTP
GET /location-context/stations/v1/CJS?module=ONLINE_BOOKING&requestedPoint=DESTINATION HTTP

Please help.
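A sketch that tightens the regex so it matches only when nothing sits between the id and " HTTP": the character class excludes "/", "?", quotes, and whitespace, so any further path segment or query string breaks the match:

```
index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=contaner
| rex field=message.input "(?<servicename>GET /shopping/carts/v1/[^/?\s\"]+ HTTP)"
| dedup servicename
| stats count by servicename
```

Against the four samples, only the second line (the id with no trailing segment) would match. Note the method GET is hardcoded; if other verbs can occur, replace it with something like (?:GET|POST|PUT|DELETE).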
I have a field extracted with transforms called Parent_Process, and I set up a field alias: Parent_Process as parent_process. If I name the alias anything alphabetically up to "parent_process", the alias does not work. If I name the alias anything from "parent_procest" (last s replaced with t) onward, or any other name alphabetically later than "parent_process" (I tried about five variants), then it DOES work. There is only this single alias, and it has global scope. btool with the app context, listing props, does not show anything different other than the name of the alias. I thought aliases were applied after transforms, so I just can't understand why this happens. Any idea what I am missing?
I have two events. The first: index=x sourcetype=xx "String", with extracted fields like manid, actionid, batchid. The second: index=y sourcetype=y "string recived", with extracted fields like manid, actionid. I want to calculate the time of the second event minus the first event. While calculating the time, manid should be the same in both events.
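A sketch, assuming manid is extracted identically on both sides and each manid appears once per sourcetype in the time range:

```
(index=x sourcetype=xx "String") OR (index=y sourcetype=y "string recived")
| stats earliest(_time) as first_time latest(_time) as second_time by manid
| eval duration_sec = second_time - first_time
| table manid first_time second_time duration_sec
```

Grouping by manid enforces that only matching pairs are compared; duration_sec is the elapsed seconds between the two events.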
I have an issue after upgrading Splunk Enterprise to the latest version (9.0.4.1). Once we upgraded, we got the warning below:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details

Then we configured cliVerifyServerName as suggested:

[sslConfig]
cliVerifyServerName=true
sslRootCAPath=/opt/splunk/etc/auth/splunkweb/ourcertificate.crt

But after that, we got the error below:

ERROR: certificate validation: self signed certificate in certificate chain
Encountered some errors while trying to obtain shcluster status.
Couldn't complete HTTP request: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed

What do I need to do to solve the issue? Thanks.
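For what it's worth, that second error usually means sslRootCAPath points at the server certificate itself rather than at the CA chain that signed it. A sketch (the path and file name are assumptions; the PEM should contain the root CA, plus any intermediates, that issued the certificates presented on the management port):

```
# server.conf (sketch)
[sslConfig]
cliVerifyServerName = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca-chain.pem
```

You can test whether a given CA file validates a server certificate with `openssl verify -CAfile ca-chain.pem server-cert.pem`. Also note that cliVerifyServerName requires the certificate's CN or SAN to match the host name the CLI connects to.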
Hi, I have tried to recreate Prometheus metrics graphs from Grafana in Splunk. However, I am getting offsets in the values of certain queries, as shown in the cases below:

Case 1: queries that use irate in PromQL.

PromQL query: (sum by(instance) (irate(node_cpu_seconds_total{instance="$node",job="$job", mode!="idle"}[$__rate_interval])) / on(instance) group_left sum by (instance)((irate(node_cpu_seconds_total{instance="$node",job="$job"}[$__rate_interval])))) * 100
Result: (screenshot)

Splunk query: | mstats rate_sum(node_cpu_seconds_total) as seconds_total where index=<index_name> by job instance mode span=15s | sort - _time | dedup mode | stats sum(seconds_total) as seconds_total sum(eval(if(mode!="idle",seconds_total,0))) as cpu_busy | eval "CPU Busy" = round((cpu_busy / seconds_total) * 100,2) | fields "CPU Busy"
Result: (screenshot)

Case 2: queries that do not use irate in PromQL.

PromQL query: avg(node_load5{instance="$node",job="$job"}) / count(count(node_cpu_seconds_total{instance="$node",job="$job"}) by (cpu)) * 100
Result: (screenshot)

Splunk query: | mstats avg("node_load5") prestats=true WHERE index=<index_name> span=15s | table _time psrsvd_sm_node_load5 | sort - _time | stats first(psrsvd_sm_node_load5) as latest_psrsvd_sm_node_load5 | join type=inner[| mstats count(node_cpu_seconds_total) prestats=true WHERE index=<index_name> by cpu span=15s | table cpu psrsvd_ct_node_cpu_seconds_total | dedup cpu | stats count(cpu) as cpu_count | table cpu_count] | eval sys_load=latest_psrsvd_sm_node_load5 / cpu_count * 100 | sort - sys_load | table sys_load
Result: (screenshot)

Could someone help with the following questions?
1. What is the irate-equivalent aggregate function in Splunk? For example: rate, rate_sum, rate_avg, or another that is not listed?
2. What might be the cause of the offset between the values in Grafana and Splunk?

Thank you very much.
Can I take the Power User Exam without getting the User Certification? I see a few answers online but nothing firm from Splunk. Pearson VUE seems to let me schedule it but it is also letting me schedule Admin.  Thank you.
How can we execute the OpenTelemetry Collector commands as a regular user, please? The given commands require root access: https://opentelemetry.io/docs/collector/getting-started/

sudo yum update
sudo yum -y install wget systemctl
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.75.0/otelcol_0.75.0_linux_amd64.rpm
sudo rpm -ivh otelcol_0.75.0_linux_amd64.rpm
sudo systemctl start otelcol
I am new to Dashboard Studio. I have 15 panels in my dashboard, and I want to create a dropdown with three options: success, warnings, and error (5 panels each). When I select success from the dropdown, only the 5 success panels should display; when I select error, only the error-related panels should show up.
Hi team, we have a custom alert action that creates an incident in ServiceNow (SNOW) when an alert triggers in Splunk. For the past few days this has not been happening, and we are getting this error: "ERROR sendmodalert [4016 AlertNotifierWorker-0] - action=sn_sec_incident_alert_admin STDERR - Failed to create an incident in snow". Your help is much needed and appreciated.
Our deployment of the Cisco eStreamer Add-on, installed on a heavy forwarder, appears to be working properly in general. However, after a few days of collecting data and sending it to Splunk Cloud, the splencore Python application stops working, even though all processes still show as "running". At that point, data stops flowing into the indexers and nothing shows up in the search heads. As soon as we restart the eStreamer client using the following command, everything starts working again: /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh restart Has anybody else experienced similar issues with the eStreamer Add-on?
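Not a root-cause fix, but while investigating, a scheduled restart can keep data flowing. A crontab sketch (the 12-hour interval is arbitrary; pick something shorter than the typical time-to-failure):

```
# crontab for the user running Splunk (sketch)
0 */12 * * * /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh restart
```

The add-on's own logs under the TA-eStreamer app directory are usually the place to look for why the client silently stalls.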
I am in the process of normalizing data so I can apply it to a data model. One of the fields having issues is called user. Some logs have data in the user field, while other logs have an empty user field but do have data in a src_user field. I tried using the coalesce function, but that does not seem to work: EVAL-user = coalesce(user, src_user) Is it because I am referencing the user field itself? Are there any other workarounds to populate user from src_user when user is empty? Remember, some logs have valid data in the user field and others have none. Thanks!
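One likely cause: coalesce only skips null values, and in many logs the "empty" user field is actually an empty string, which is not null, so coalesce(user, src_user) happily returns "". A props.conf sketch that guards against both cases:

```
# props.conf (sketch)
EVAL-user = if(isnull(user) OR user=="", src_user, user)
```

Referencing user on the right-hand side of EVAL-user is fine here, because calculated fields are computed from the original extracted values, not from the output of other calculated fields.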