All Topics

Hi, I just wondered if Oracle Cloud has tagging to onboard data like AWS does for Splunk, like this: splunk add monitor /var/log/secure. Thanks
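For comparison, on an OCI host running a universal forwarder the same monitor can also be declared in inputs.conf rather than via the CLI (a sketch — the index and sourcetype values below are placeholders, not OCI-specific settings):

```
# inputs.conf (sketch; index/sourcetype are assumed values)
[monitor:///var/log/secure]
index = os_linux
sourcetype = linux_secure
disabled = false
```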
Hi Team, how do I install the Splunk UF on an AIX system? I referred to the link below, but I'm still not sure how. Link - Install a *nix universal forwarder - Splunk Documentation. Thanks
I am trying to create a text input in a Splunk dashboard that should exclude the ticket numbers the user enters in the text box when running the query; if the user doesn't enter anything in the text box, it should search all tickets by default. I tried a few ways to achieve this through eval, makeresults, etc., but had no luck getting it to work. Any ideas on how I can achieve this functionality?

<form version="1.1" theme="light">
  <label>TEST</label>
  <search id="tickets">
    <query>
      index=tickets earliest=-1d latest=now
      | eval search_ticket=if(len("$ticket_number$")=0, "ticket_number=*", "NOT ticket_number IN ($ticket_number$)")
    </query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="ticket_number">
      <label>ticket_number</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Results</title>
        <search base="tickets">
          <query>| search $search_ticket$ | table ticket_number</query>
        </search>
      </table>
    </panel>
  </row>
</form>
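One pattern that may work here (a sketch, untested): compute the search_ticket token in a <change> handler on the input itself, since an eval inside the base search creates a field on events, not a dashboard token the panel can read. The token and field names below match the ones in the question; the empty-default behaviour is an assumption.

```xml
<input type="text" token="ticket_number">
  <label>ticket_number</label>
  <default></default>
  <change>
    <!-- $value$ is substituted with the text typed by the user before
         the eval runs; an empty box falls back to matching all tickets -->
    <eval token="search_ticket">if(len(trim("$value$"))=0, "ticket_number=*", "NOT ticket_number IN ($value$)")</eval>
  </change>
</input>
```

The panel search would then stay as `| search $search_ticket$ | table ticket_number`, with $search_ticket$ now being a real token set by the handler.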
We are ingesting a large volume of network data and would like to use tstats to make searches faster. The query index=myindex returns results as expected, but a basic tstats like | tstats count where index=myindex returns zero results. What could be the cause? I also attempted | tstats count where index=federated:myindex, but it did not help.
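As a first diagnostic (a sketch; only the index name from the question is reused), it can help to see which indexes tstats can see at all, since tstats reads only index-time fields from the tsidx files rather than the raw events:

```
| tstats count where index=* by index
```

If myindex does not appear in that output, tstats has no local tsidx data for it — which is consistent with it being a remote/federated index that plain tstats on this search head cannot reach. If it does appear, narrowing by sourcetype is a reasonable next check:

```
| tstats count where index=myindex by sourcetype
```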
This is my first time using Splunk in my environment, and we have chosen the Splunk Cloud platform. Since it is my first time, how can I determine the system requirements for a server (physical or virtual) before installing the Universal Forwarder?
What is the reason that Splunk UBA gives me this Kafka error, and how can I fix it: "Kafka topics are not receiving events" and "Kafka Broker"?
Ok, maybe it is too much Splunk today. Whatever it is, I cannot for the life of me remember how to do this. I am doing a basic search on some logs and I want to show the search term in the table results. The term is being queried out of _raw:

index=myindex sourcetype=mystuff Environment=thisone "THE_TERM"
| top Environment by userid
| table Environment, userid

Where and how do I add "THE_TERM" to the table results?
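One way that may work (a sketch built on the search from the question): materialize the literal term as a field with eval, then carry that field through to the table. Since top only keeps the fields it is given, the new field has to be listed in the top clause as well:

```
index=myindex sourcetype=mystuff Environment=thisone "THE_TERM"
| eval term="THE_TERM"
| top Environment, term by userid
| table Environment, userid, term
```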
Hello, we have 2 Splunk platforms and we are using _TCP_ROUTING to forward logs. System logs from the 1st platform's indexers are currently indexed locally. We also want to receive system logs from the 1st platform's indexers on our 2nd platform; however, there is no default tcpout group on the 1st platform's indexers. So should we create a default outputs.conf on the 1st platform's indexers in a way that continues indexing local system logs? Thanks for your help.
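A sketch of what that default group could look like on the 1st-platform indexers (the group name and receiver hostnames are placeholders). The key setting is indexAndForward, which keeps indexing the system logs locally while also sending them to the default group:

```
# outputs.conf on the 1st-platform indexers (sketch; names are placeholders)
[tcpout]
defaultGroup = platform2
indexAndForward = true

[tcpout:platform2]
server = idx2a.example.com:9997, idx2b.example.com:9997
```

Existing _TCP_ROUTING overrides on individual inputs keep working; the default group only applies to data with no explicit route.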
Hi All, thanks for your time. I have a query for getting the number of errors for each client/customer, api_name, time, etc.:

index=index_api | stats count by customer, api_name, _time

If I have a dataset like the one below, how do I take a snapshot of it and compare it against the next 30-minute dataset?

Client/customer   api_name        _time               count
Abc               Validation_V2   2024 Oct 29 10.30   10
Xyz               Testing_V2      2024 Oct 29 10.30   15
TestCust          Testing_V3      2024 Oct 29 10.30   20

Assuming these are for the last 30 minutes: when I get to the next run, say after 30 minutes, I want to see whether the same dataset is repeated so that I can get a consecutive error count. Any guidance or helpful suggestions?
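One snapshot-and-compare pattern that may fit (a sketch; the lookup file name previous_run.csv is a placeholder, and the lookup must exist before the first comparison): keep the previous 30-minute aggregate in a CSV lookup, compare the current run against it, then rewrite the lookup for the next run.

```
index=index_api earliest=-30m
| stats count by customer, api_name
| lookup previous_run.csv customer api_name OUTPUT count AS prev_count
| eval repeated=if(isnotnull(prev_count), "yes", "no")
```

A second scheduled search (or `| fields customer api_name count | outputlookup previous_run.csv` appended after the comparison, so the prev_count/repeated columns are dropped first) refreshes the snapshot every 30 minutes.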
Hello, I need help creating a search query that shows just our logfile when the same error line appears on all rows. This error code also appears on other days in the same logfile, but I don't want that to show up. If no other info except this error shows up in the logfile, our app is failing and I need to catch that.

c.q.s.c.StoreHourSyncRestController : *** Sync Busy ***

Please assist. Thank you! Andie Medalla
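If the goal is to catch a window where every line in the logfile is that error and nothing else, one sketch (the index and source values are assumptions, not from the question) is to count total events versus "Sync Busy" events per source and keep only sources where the two counts match:

```
index=app_logs source="*storehoursync*"
| eval is_busy=if(searchmatch("Sync Busy"), 1, 0)
| stats count AS total sum(is_busy) AS busy_count by source
| where total=busy_count AND busy_count>0
```

Run over a bounded time range (e.g. the last hour), this returns a row only when the app is emitting nothing but the Sync Busy error, which could then drive an alert.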
Hello, I would like to send data from two different indexes, one to an indexer and the other to an intermediate forwarder. How does the configuration need to be updated in the Universal Forwarder? Thanks
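A sketch of the per-input routing (paths, index names, group names, and hostnames are all placeholders): each input stanza in inputs.conf sets _TCP_ROUTING to a tcpout group, and outputs.conf defines one group per destination.

```
# inputs.conf on the UF (sketch)
[monitor:///var/log/app_a.log]
index = index_a
_TCP_ROUTING = direct_indexers

[monitor:///var/log/app_b.log]
index = index_b
_TCP_ROUTING = intermediate_fwd

# outputs.conf on the UF (sketch)
[tcpout:direct_indexers]
server = idx1.example.com:9997

[tcpout:intermediate_fwd]
server = hf1.example.com:9997
```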
Good morning, I need help. I have three search head servers. Two of them (SH1 and SH2) present the "_time" field correctly. However, the other server (SH3) presents the "_time" field three hours ahead of the other two (SH1 and SH2). How do I resolve this? SH2 - normal; SH3 - three hours ahead (with the same search range).
We have an on-prem Splunk Enterprise, version 9.0.4.1. We updated the IDP URL in the SAML configuration and, after uploading the new IDP certificate to all search heads under .../auth/idpCerts, it worked for about an hour and then stopped working with the error: "Verification of SAML assertion using IDP's cert failed. Unknown signer of SAML response". We logged in to the search heads and noticed that the updated idpCert.pem is breaking authentication; we are currently investigating whether this is a system-related issue. Is this a known issue?
Hi everyone, I'm currently working on integrating Trellix ePolicy Orchestrator (ePO) logs into Splunk for better monitoring and analysis. I would like to know the best approach to configure Splunk to collect and index logs from the Trellix ePO server. Specifically, I'm looking for details on:
- Recommended methods (e.g., syslog, API, or other tools/add-ons)
- Any Splunk add-ons or apps that facilitate ePO log ingestion
- Best practices for configuring and parsing these logs in Splunk
Any guidance or references to documentation would be greatly appreciated! Thank you!
Hello. I'm trying to transfer metrics collected from Prometheus to my cloud instance. According to https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html I should use the splunk_hec exporter. The configuration for OpenTelemetry looks like:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'prometheus'
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:9090']

exporters:
  splunk_hec:
    token: "xxxxxxxx"
    endpoint: "https://http-inputs-xxxx.splunkcloud.com/services/collector"
    source: "lab"
    sourcetype: "lab"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [splunk_hec]

but I'm receiving an error that splunk_hec is not accepted as an exporter:

error decoding 'exporters': unknown type: "splunk_hec" for id: "splunk_hec" (valid values: [nop otlp kafka zipkin debug otlphttp file opencensus prometheus prometheusremotewrite])

Do I have to use any intermediate solution to achieve this goal? Thanks. Sz
Good day, I want to join two indexes to show all the email addresses of the users that signed in. This queries my Mimecast sign-in logs:

index=db_mimecast splunkAccountCode=* mcType=auditLog
| dedup user
| table _time, user
| sort _time desc

Let's say it returns a user@domain.com that signed in. I then want to join this to show all the info from:

index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

I tried an inner join, but I do not have anything that matches, since my results for the second query come back like this:

identity: USurname
email: user@domain.com, userT1@domain.com, user@another.com
extensionAttribute10: user@domain.com
extensionAttribute11: user@domain.com
first: user
last: Surname
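One join-free pattern that may work here (a sketch; it assumes the Mimecast user field and the LDAP email field hold the same email-address format): use the sign-in search as a subsearch that filters the identity search by email, then aggregate by identity as before.

```
index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | dedup user
      | rename user AS email
      | fields email ]
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
```

The subsearch returns email=... terms, so only identities whose email matches a signed-in user survive; no common key between the two result tables is needed.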
We are hosting Splunk Enterprise on AWS EC2 instances; the flow goes as follows: ALB > Apache reverse proxies > ALB > SHC <> indexers. After a period of time (mostly days) we start to experience 504 gateway timeouts, which disappear when we restart our proxies, and then we go for another round, and so on. Any clues on how to troubleshoot this? We adjusted the timeout parameters on the application and on the application load balancers, but the problem persists.
Hello Splunkers, I would like to pass one of two base searches depending on an input dropdown: when it is set to All, I need to pass one base search; when any value other than All is selected, it needs to pass a different base search. Thanks!
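One common way to switch between two base searches (a sketch; all token names, base search ids, and the field name are placeholders): a <change> handler on the dropdown sets one of two visibility tokens, and two panels — each bound to its own base search — use depends so only one renders at a time.

```xml
<input type="dropdown" token="selection">
  <label>Scope</label>
  <choice value="*">All</choice>
  <change>
    <condition value="*">
      <set token="show_all">true</set>
      <unset token="show_one"></unset>
    </condition>
    <condition>
      <set token="show_one">true</set>
      <unset token="show_all"></unset>
    </condition>
  </change>
</input>

<!-- elsewhere in the dashboard: one panel per base search -->
<panel depends="$show_all$">
  <table><search base="base_all"><query>| stats count</query></search></table>
</panel>
<panel depends="$show_one$">
  <table><search base="base_specific"><query>| search field=$selection$ | stats count</query></search></table>
</panel>
```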
Hi guys, I have one master list that includes all items, and I want to consolidate two other time-related tables into a single chart, as shown in the example below.

[screenshots: master list, time-related table 1, time-related table 2, result chart]

And could I use the chart to produce the pivot chart in Splunk?
Hello, we have suddenly been facing a weird error: our production Splunk Cloud Enterprise Security Incident Review dashboard isn't showing the drill-down searches in any of the triggered notables. For all of them, a "Something went wrong" message is thrown up. I tried changing roles to ess_admin and tried multiple drilldown searches, but none helped. I am wondering if this is an app backend problem, but I just wanted to make sure I am not missing anything before opening a support ticket. Any help would be greatly appreciated.