All Topics


This is my first time using Splunk in my environment; we have chosen the Splunk Cloud Platform. Since it is my first time, how can I determine the system requirements for a server (physical or virtual) before installing the Universal Forwarder?
What is the reason that Splunk UBA gives me this error, and how can I fix it: "Kafka topics are not receiving events" and "Kafka Broker"?
OK, maybe it is too much Splunk today. Whatever it is, I cannot for the life of me remember how to do this. I am doing a basic search on some logs and I want to show the search term in the table results. The term is being queried out of _raw:

    index=myindex sourcetype=mystuff Environment=thisone "THE_TERM" | top Environment by userid | table Environment, userid

Where and how do I add "THE_TERM" to the table results?
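A minimal sketch of the idea, assuming the literal term can simply be added as a field (the field name search_term is illustrative, not from the original post); placing the eval after top means the new field survives into the final table:

    index=myindex sourcetype=mystuff Environment=thisone "THE_TERM"
    | top Environment by userid
    | eval search_term="THE_TERM"
    | table Environment, userid, search_term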
Hello, we have 2 Splunk platforms and we are using _TCP_ROUTING to forward logs. System logs from the 1st platform's indexers are currently indexed on those indexers themselves. We also want to receive the system logs from the 1st platform's indexers on our 2nd platform; however, there is no default tcpout group on the 1st platform's indexers. So should we create a default outputs.conf on the 1st platform's indexers to continue indexing local system logs? Thanks for your help.
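A hedged outputs.conf sketch of what that could look like on the 1st-platform indexers, with the group name and the 2nd-platform receiver addresses as placeholders; indexAndForward keeps indexing the local system logs while also forwarding them to the default group:

    [tcpout]
    defaultGroup = platform2_indexers
    indexAndForward = true

    [tcpout:platform2_indexers]
    server = idx1.platform2.example.com:9997, idx2.platform2.example.com:9997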
Hi All, thanks for your time. I have a query for getting the number of errors for each client/customer, api_name, time, etc.:

    index=index_api | stats count by customer, api_name, _time

If I have a dataset like the one below, how do I take a snapshot of it and compare it against the next 30-minute dataset?

    Client/customer    api_name         _time                count
    Abc                Validation_V2    2024 Oct 29 10.30    10
    Xyz                Testing_V2       2024 Oct 29 10.30    15
    TestCust           Testing_V3       2024 Oct 29 10.30    20

Assuming these are for the last 30 minutes: when I get to the next run, say after 30 minutes, I want to see whether the same dataset is repeated, so that I can get the consecutive error count. Any guidance or helpful suggestions?
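One hedged way to sketch this, assuming a scheduled search and a lookup file named previous_run.csv (both names are illustrative): each run compares its fresh 30-minute counts against the counts that the previous run stored in the lookup.

    index=index_api earliest=-30m
    | stats count AS current_count BY customer, api_name
    | lookup previous_run.csv customer, api_name OUTPUT count AS previous_count
    | eval consecutive=if(isnotnull(previous_count) AND previous_count > 0, "yes", "no")

A second scheduled search (or a trailing outputlookup step) would then refresh the snapshot for the next comparison:

    index=index_api earliest=-30m
    | stats count BY customer, api_name
    | outputlookup previous_run.csv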
Hello, I need help in creating a search query that filters the results to show our log file only when the same error line appears on all rows. This error code also appears on other days in the same log file, but I don't want that to show up. If nothing except this error shows up in the log file, our app is failing and I need to catch that.

    c.q.s.c.StoreHourSyncRestController : *** Sync Busy ***

Please assist. Thank you! Andie Medalla
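As a hedged sketch of the idea (the index, source, and time range below are placeholders, not from the original post), one approach is to count all events for the log file in the window and compare that with the count of events containing the error line, keeping the result only when the two are equal:

    index=app_logs source="*StoreHourSync*" earliest=-30m
    | stats count AS total_events, count(eval(like(_raw, "%Sync Busy%"))) AS busy_events
    | where total_events > 0 AND total_events = busy_events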
Hello, I would like to send data for two different indexes: one to an indexer and the other to an intermediate forwarder. How does the configuration need to be updated in the Universal Forwarder? Thanks
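A hedged sketch of one way to do it, assuming the group names, hostnames, monitor paths, and index names are all illustrative: define two tcpout groups in outputs.conf and route each input to its group with _TCP_ROUTING in inputs.conf.

outputs.conf:

    [tcpout:indexer_group]
    server = indexer.example.com:9997

    [tcpout:intermediate_group]
    server = intermediate-fwd.example.com:9997

inputs.conf:

    [monitor:///var/log/app_a]
    index = index_a
    _TCP_ROUTING = indexer_group

    [monitor:///var/log/app_b]
    index = index_b
    _TCP_ROUTING = intermediate_group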
Good morning, I need help. I have three search head servers. Two of them (SH1 and SH2) are presenting the _time field correctly. However, the other server (SH3) is presenting the _time field with three hours more than the other two (SH1 and SH2). How do I resolve this?

SH2 - Normal
SH3 - With three more hours (with the same search range)
We have an on-prem Splunk Enterprise, version 9.0.4.1. We updated the IdP URL in the SAML configuration, and after uploading the new IdP certificate to all search heads under .../auth/idpCerts, it was working for about an hour and then stopped working with the error: "Verification of SAML assertion using IDP's cert failed. Unknown signer of SAML response". We logged in to the search heads and noticed that the updated idpCert.pem is breaking authentication; we are currently investigating whether this is a system-related issue. Is this a known issue?
Hi everyone, I'm currently working on integrating Trellix ePolicy Orchestrator (ePO) logs into Splunk for better monitoring and analysis. I would like to know the best approach to configure Splunk to collect and index logs from the Trellix ePO server. Specifically, I'm looking for details on:

- Recommended methods (e.g., syslog, API, or other tools/add-ons)
- Any Splunk add-ons or apps that facilitate ePO log ingestion
- Best practices for configuring and parsing these logs in Splunk

Any guidance or references to documentation would be greatly appreciated! Thank you!
Hello. I'm trying to transfer metrics collected from Prometheus to my cloud instance. According to https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html I should use the splunk_hec exporter. The configuration for OpenTelemetry looks like this:

    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'prometheus'
              scrape_interval: 10s
              static_configs:
                - targets: ['localhost:9090']

    exporters:
      splunk_hec:
        token: "xxxxxxxx"
        endpoint: "https://http-inputs-xxxx.splunkcloud.com/services/collector"
        source: "lab"
        sourcetype: "lab"

    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [splunk_hec]

but I'm receiving an error that splunk_hec is not accepted as an exporter:

    error decoding 'exporters': unknown type: "splunk_hec" for id: "splunk_hec" (valid values: [nop otlp kafka zipkin debug otlphttp file opencensus prometheus prometheusremotewrite])

Do you have to use any intermediate solution to achieve this goal? Thanks. Sz
Good day, I want to join two indexes to show all the email addresses of the users that signed in. This queries my Mimecast sign-in logs:

    index=db_mimecast splunkAccountCode=* mcType=auditLog
    | dedup user
    | table _time, user
    | sort _time desc

Let's say it returns a user@domain.com that signed in. I then want to join this to show all the info from:

    index=collect_identities sourcetype=ldap:query
    | dedup email
    | eval identity=replace(identity, "Adm0", "")
    | eval identity=replace(identity, "Adm", "")
    | eval identity=lower(identity)
    | table email extensionAttribute10 extensionAttribute11 first last identity
    | stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

I tried an inner join, but nothing matches, since my results come back like this for my second query:

    identity  email  extensionAttribute10  extensionAttribute11  first  last
    USurname  user@domain.com userT1@domain.com user@domain.com user@domain.com user@another.com user@domain.com  user  Surname
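A hedged sketch of one way the two searches could be joined, assuming the Mimecast user field holds the same email address as the LDAP email field (the lower() normalization and the final table fields are illustrative):

    index=db_mimecast splunkAccountCode=* mcType=auditLog
    | dedup user
    | eval email=lower(user)
    | join type=inner email
        [ search index=collect_identities sourcetype=ldap:query
          | dedup email
          | eval email=lower(email)
          | table email extensionAttribute10 extensionAttribute11 first last identity ]
    | table _time, user, identity, email, extensionAttribute10, extensionAttribute11, first, last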
We are hosting Splunk Enterprise on AWS EC2 instances. The flow goes as follows: ALB > Apache reverse proxies > ALB > SHC <> indexers. After a period of time (mostly days) we start to experience 504 gateway time-outs, which disappear when we restart our proxies, and then we go for another round, and so on. Any clues on how to troubleshoot this? We adjusted the timeout parameters on the application and on the application load balancers, but the problem still persists.
Hello Splunkers, I would like to pass one of two base searches depending on an input dropdown: when the dropdown is set to All, I need to pass one base search, and when any value other than All is selected, it needs to pass a different base search. Thanks!
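A hedged Simple XML sketch of the idea, where the token names, choices, and search strings are all placeholders: a change handler on the dropdown sets a base-search token one way for the All choice and another way for any other choice.

    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="*">All</choice>
      <default>*</default>
      <change>
        <condition value="*">
          <set token="base_query">index=main sourcetype=app_logs</set>
        </condition>
        <condition>
          <set token="base_query">index=main sourcetype=app_logs env=$value$</set>
        </condition>
      </change>
    </input>

The dashboard's base search could then simply reference $base_query$.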
Hi guys, I have one master list that includes all items, and I want to consolidate two other time-related tables into a single chart, as shown in the example below.

[Images: master list; time-related table 1; time-related table 2; result chart]

And could I use the chart to produce a pivot chart in Splunk?
Hello, we have suddenly been facing a weird error: our production Splunk Cloud Enterprise Security Incident Review dashboard is no longer showing the drill-down searches in any of the triggered notables. For all of them, a "Something went wrong" message is thrown. I tried changing the roles to ess_admin and tried multiple drill-down searches, but none of that helped. I am wondering if this is an app backend problem, but I just wanted to make sure I am not missing anything before opening a support ticket. Any help would be greatly appreciated.
So I have a lookup file with a complete list of servers and their details, like version, owner, etc., and an index my_index that receives logs from those servers. This is the search I am using right now:

    | inputlookup my_lookup.csv
    | join type=left server_name
        [ | tstats count where index=my_index by host
          | eval reporting="yes" ]
    | eval reporting=if(isnull(reporting),"No","Yes")

I want to validate the list by referencing it against the tstats results and show the whole list from the lookup file. What I want to know is whether this search is accurate, whether the subsearch will truncate results and give me inaccurate output, and whether there is an alternate way to write this search. Please help.
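A hedged sketch of a join-free alternative, assuming the lookup's key field is server_name and that it should be matched against the host values from tstats (the in_lookup marker and the values(*) roll-up are illustrative); this avoids the subsearch result limits that join can hit:

    | inputlookup my_lookup.csv
    | eval in_lookup="yes"
    | append
        [ | tstats count WHERE index=my_index BY host
          | rename host AS server_name
          | eval reporting="Yes" ]
    | stats values(*) AS * BY server_name
    | where in_lookup="yes"
    | eval reporting=coalesce(reporting, "No")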
Good day, is there a way to join all my rows into one? My simple query:

    index=collect_identities sourcetype=ldap:query user
    | dedup email
    | table email extensionAttribute10 extensionAttribute11 first last identity

shows results like this, as I have more than one email:

    email              extensionAttribute10  extensionAttribute11  first  last     identity
    user@domain.com                          user@consultant.com   User   Surname  USurname
    userT1@domain.com  user@domain.com       user@domain.com       User   Surname  USurname
    userT0@domain.com  user@domain.com       user@domain.com       User   Surname  USurname

I want to add a primary key that searches for "user@domain.com" and displays all the email addresses that they have in one row. Example:

    email            extensionAttribute10  extensionAttribute11  first  last     identity  email2             email3
    user@domain.com  user@domain.com       user@consultant.com   User   Surname  USurname  userT1@domain.com  userT0@domain.com
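A hedged sketch of one way to collapse the rows, assuming identity is the key to group on; this keeps all the addresses as a multivalue email field rather than separate email2/email3 columns:

    index=collect_identities sourcetype=ldap:query user
    | dedup email
    | stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity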
Hello all, I configured an app, and in the asset configuration I added an environment variable "https_proxy", but somehow I see that the action still does not go out via the proxy and instead tries to go directly to the destination address. I opened the app code to look for references to this variable, but I couldn't find any. Can anyone shed some light and explain how I can check where these variables are referenced? In other apps I managed to use the proxy variable successfully; this only happens to me with the AD LDAP app.
Hi there, I have a cluster on MongoDB Atlas that contains my data and is connected to my application. That cluster produces logs that can be downloaded in .log format or .gz (compressed) format. To query and view my logs easily, I would like to use Splunk. Is there any way to ingest those logs from MongoDB Atlas into a Splunk instance via an API? If there is, could anyone kindly share any documentation or process on how to accomplish this? NB: I can obtain the logs from MongoDB Atlas via a cURL request.
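As a hedged sketch, and only assuming an HTTP Event Collector (HEC) token is available on the Splunk side (the hostname, token, index, and sourcetype below are placeholders): once the log lines have been pulled from Atlas with cURL, they could be posted to Splunk's HEC endpoint with another cURL request, for example:

    curl "https://splunk.example.com:8088/services/collector/event" \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"event": "<one Atlas log line>", "sourcetype": "mongodb:atlas", "index": "mongodb"}'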