All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm creating a dashboard that lets you select a region and then retrieves data only for customers in that region. Each customer has their own "index", and the index name is the customer name. I'd like to avoid a subsearch because it's limited to 10k rows, although subsearching the lookup itself is an option. The region isn't included in the customer index data.

------------------------------
Lookup data set:

Region | Customer
US1    | Mcdonalds
US2    | Macys
AU1    | Atlassian
AU2    | Outback
------------------------------

The issue I've run into is that when I retrieve the list of customer names from the lookup, the subsearch is limited to 10k rows, and there are far more rows that need to be included. I created a very inefficient query which I'm unhappy about, hence why I'm here:

index="*" [| inputlookup CSS_Customers where Region = $inputregion$ | fields Customer | rename Customer as index]

Note: I tried tstats to pull a single field, but ran into an index issue. It could be because our "index" field isn't indexed.
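For reference, the alternative I've been sketching (untested; it assumes the same CSS_Customers lookup and field names, and that every customer index appears in the lookup) is to skip the subsearch entirely and filter on Region after enriching with the lookup:

index=*
| lookup CSS_Customers Customer AS index OUTPUT Region
| where Region="$inputregion$"

I realize this still scans every index up front, which is why I'd prefer something smarter.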
I have an existing database input that is reading from an Oracle database.  Existing Dashboard A uses that database input.  I want to use the same database input on a new Dashboard B, but to do so I would need to include one additional field in the SELECT statement of the database input.  It wouldn't change how many rows are returned by the query (no select distincts), and no additional joins are needed.  Are there any negative ramifications of doing this, or am I good to go?  We only have a PROD environment, so I want to be extra cautious in making any changes.
I'm creating a new dashboard using an existing database input.  This dashboard will have multiple (7+) panels searching the same database input, with the same filtering criteria, but different grouping and aggregating.  I was wondering if there is a better way to do this?  Does this cause 7+ separate reads to the same DBInput when it could be accomplished in one?  Thanks!
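For context, the pattern I'm considering is a single base search over the DB input's data, with each panel only doing its own grouping, roughly like this (index, sourcetype, and field names are placeholders I made up):

Base search:  index=oracle_db sourcetype=dbx_orders status="OPEN"
Panel 1:      | stats count BY region
Panel 2:      | timechart span=1d sum(amount)
Panel 3:      | top limit=10 product

I'm not sure whether Splunk already de-duplicates those reads, or whether a base search / post-process setup like this is the right way to avoid 7+ hits on the same input.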
All, so my Management Console health check is flagging my indexers with "Local indexing on non-indexer instances". Did I miss something needed to pass this health check? Indexers shouldn't need to output anything, per my understanding? The indexer is correctly assigned the "indexer" role in the Management Console setup. Is there something I missed, perhaps a .conf file somewhere that also needs updating?

Splunk 9.0 on EL8.
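In case it's useful, this is the check I was planning to run from the monitoring instance to see how each peer actually reports its roles (read-only; it assumes the indexers are configured as search peers of that instance):

| rest splunk_server=* /services/server/info
| table splunk_server server_roles

I'd then compare that output against the role assignments in the console setup, in case they disagree.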
Our Windows slave comes back with errors in jenkins-slave.err.log saying the host cannot be resolved:

com.splunk.splunkjenkins.utils.LogConsumer run
SEVERE: message not delivered:{length:3173 {"host":"our.host.com","index":"jenkins_console","sourcetype":"text:jenkins","time":"1660230122.365","source":"job}
java.net.UnknownHostException: http-inputs-ourhost.splunkcloud.com
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
    at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    at com.splunk.splunkjenkins.utils.MultipleHostResolver.resolve(MultipleHostResolver.java:23)
    at shaded.splk.org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
    at shaded.splk.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
    at shaded.splk.org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
    at shaded.splk.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
    at shaded.splk.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
    at shaded.splk.org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
    at shaded.splk.org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
    at shaded.splk.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
    at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
    at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:221)
    at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
    at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
    at com.splunk.splunkjenkins.utils.LogConsumer.run(LogConsumer.java:89)

It seems the configured http-inputs-ourhost.splunkcloud.com entry is used on the slave, where it cannot be resolved...

Env:
Jenkins 2.334
Splunk Plugin 1.9.9
Splunk Plugin Extensions 1.9.9
Windows slave in a Kubernetes cluster
Hello. I have an index of Akamai logs forwarded to Splunk, and I'm trying to query for origin latency, which is a JSON field (netPerf.netOriginLatency) in these logs. How can I query through the CLI to return the value of this field per minute, for example (average latency per minute)?

First, I tried to query just for the field value (without time filters) with 'spath' as shown below, but it didn't work:

./splunk search 'index=akamai message.fwdHost=someservice.mydomain.com | spath=netPerf.netOriginLatency'

How could I do that? Is it possible?

Best,
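What I've pieced together so far, but not yet verified from the CLI (the spath syntax and the time-range arguments are my assumptions), is something like:

./splunk search 'index=akamai message.fwdHost=someservice.mydomain.com | spath output=originLatency path=netPerf.netOriginLatency | timechart span=1m avg(originLatency)' -earliest_time '-1h' -latest_time now

I'm mainly unsure whether spath needs output=/path= here and how the time range should be passed on the command line.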
Hello! I am trying to use makeresults + eval inside a sendalert parameter, but it doesn't return what I need. Here is the example:

index=client1 sourcetype=report_case source=splunk-hf
| table action_date case_post_date action_taken arn scheme_case_number client_internal_id uuid acquirer_case_number
| sendalert s3_upload param.bucket_name="bucket_name" param.file_format="csv" param.file_name=[| makeresults | eval filename=strftime(now(), "filename-PreviousDay_%Y_%m_%d_%H_%M_%S") | return $filename]

The file is created, but with a default name, "test_20220811.csv". What am I doing wrong in the search? Thanks
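One thing I plan to check is running the subsearch on its own to see what it expands to (this is just the inner piece, nothing specific to the s3_upload alert action):

| makeresults
| eval filename=strftime(now(), "filename-PreviousDay_%Y_%m_%d_%H_%M_%S")
| return $filename

My understanding is that return $filename should hand back just the value, so I'm not sure whether sendalert even expands the subsearch before passing param.file_name to the alert action.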
Hi All, Splunk 101 question. What are our options if we want to forward OS-level logs (for example, ssh user login/logout activity) from a Deployment Server to our indexer? As a DS is a full Splunk Enterprise instance, it is not recommended to put a UF on the same host. Where do I configure it to monitor the OS syslog file as well? Is it $SPLUNK_HOME/etc/system/local/inputs.conf? If yes, how do I maintain this inputs.conf copy for updates, as I assume we cannot push updates to this file from the same host itself? Any best practices here? My DS is currently sending _audit and _introspection logs to the indexers, which contain info about the Splunk platform and not the OS. Hope I am clear. Thank you
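For reference, the kind of stanza I had in mind, placed in a dedicated app's local/inputs.conf on the DS rather than in system/local (the index and sourcetype names are just placeholders I made up):

[monitor:///var/log/secure]
disabled = false
index = os_linux
sourcetype = linux_secure

I'm mainly unsure whether managing this as a locally maintained app on the DS itself is the accepted practice.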
Hi All, can somebody help me start building this alert?

Alert on PW Startup Critical Failure

The alert should trigger if any events with the following error message are seen. The impacted hosts should be listed in the alert email.

Base Search:
index=app_v source=*System.log "Instantiation of bean failed; nested exception is org.springwork.beans.BeanInstantiationException: Could not instantiate bean class [iv.ws.report.pw.ipg.cache.SchedulerJob]: Constructor threw exception"

Context: the PW application has not started up successfully following a code deployment or server start.
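This is roughly where I've got to on my own (just a sketch; the idea is to schedule it as an alert that triggers when the number of results is greater than zero):

index=app_v source=*System.log "Instantiation of bean failed; nested exception is org.springwork.beans.BeanInstantiationException: Could not instantiate bean class [iv.ws.report.pw.ipg.cache.SchedulerJob]: Constructor threw exception"
| stats count AS failures BY host

With the results table included in the email, each impacted host should show up as a row, but I'm not sure that's the cleanest way to get the host list into the message.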
It appears the configuration.py for SA-ldapsearch is referencing the deprecated caPath and caCertFile settings from server.conf:

ca_path = os.path.expandvars(get_ssl_configuration_setting('caPath',
ca_cert_file = get_ssl_configuration_setting('caCertFile', default='')

I've asked support whether that will be updated and whether the workaround is to configure the caPath and caCertFile settings.
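For context, my understanding (which may be off) is that these correspond to the old [sslConfig] settings in server.conf that were superseded by sslRootCAPath, something along the lines of:

[sslConfig]
# deprecated settings the app still reads
caCertFile = cacert.pem
caPath = $SPLUNK_HOME/etc/auth
# newer replacement
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem

so the workaround would presumably be to keep caPath/caCertFile set explicitly until the app is updated.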
Hello, I'm writing because I have an issue with Splunk UI and an SPL search. I'm a new developer and I am discovering the Splunk UI framework.

Everything was fine until now. When I use raw data in the dashboard it works, but when I put in an SPL search such as:

search3: {
    type: 'ds.search',
    options: {
        query: "index=\"phantom_container\" | dedup id | search severity = \"critical\" | stats count",
        queryParameters: {
            earliest: "-7d@d",
            latest: "now",
        },
        meta: {},
    },
},

Splunk says that a TenantId is required. I don't understand this issue. Can you help me resolve it or suggest a solution, please?

Any help is welcome.
Hi, I have a series of bar charts, and when I hover over each bar I currently see the count value. What I actually need is the percentage value. Here is my current query and bar chart:

| inputlookup Migration-Status-All.csv
| search Vendor = "Symantec"
| eval dummy = 'Migration Comments'
| chart count over "Migration Comments" by dummy

How can I change my query to show a percentage when hovering over each bar? Many thanks, Patrick
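The direction I've been trying (not sure it keeps the chart layout intact) is to compute the percentage alongside the count and chart that instead:

| inputlookup Migration-Status-All.csv
| search Vendor = "Symantec"
| stats count BY "Migration Comments"
| eventstats sum(count) AS total
| eval percent = round(count / total * 100, 1)
| fields "Migration Comments" percent

but with this I lose the by-dummy split that my original chart used.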
Hello all, I have an app developed on my Linux Splunk sandbox and it is working fine there. After copying it to the deployment server and deploying it to a UF running on Linux, it's not running at all. The inputs.conf is:

[script://$SPLUNK_HOME/etc/apps/PBNL_getTVlogs/bin/getTVlogs.sh]
disabled = false
interval = 0 14 * * *
index = tvlogs
sourcetype = TVlogs

[monitor://$SPLUNK_HOME/etc/apps/PBNL_getTVlogs/logs/TVlogs.csv]
disabled = false
index = tvlogs
sourcetype = TVlogs

So what's wrong here? Any help is welcome.
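To see whether the UF is even trying to launch the script, I was going to check its internal logs with something like this (assuming the UF forwards _internal; the host filter is a placeholder):

index=_internal host=<uf_hostname> sourcetype=splunkd (ExecProcessor OR "getTVlogs.sh")

and also confirm on the UF that the script landed under etc/apps/PBNL_getTVlogs/bin and is still executable after deployment.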
I am new to Splunk and still working out the kinks. I'm wondering why, although I have the iplocation of clients and so on, when I try to select just one country in the Country field, selecting one gives me nothing. How do I get around this?
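In case it helps to show what I mean, the search I'm effectively running looks like this (the index and IP field names are mine):

index=web_traffic
| iplocation clientip
| search Country="United States"

and that last filter is the part that returns nothing, even though Country appears populated in the event list.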
Hello Community, we have 2 target groups to route events to (2 sets of indexers: one is ours, the other a 3rd party). I want to configure a Splunk HF to route events which do not contain a particular keyword (like a NOT operation) to one target group, and all events to the other target group. For example, below is what my transforms.conf should look like, except that I am not sure about the REGEX setting.

transforms.conf

[specific_events]
REGEX = "NOT ping"
DEST_KEY = _TCP_ROUTING
FORMAT = specific_event_targetgroup

[all_events]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = all_event_targetgroup

I have tried a few regexes, ^(?!.*ping).* and ^((?!ping).)*$, which worked in regex101 and in the Splunk UI search, but not in the conf files. Once I applied these regexes to the conf file, no events were reaching the indexers. Can someone help with this?
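For completeness, this is how I believe the pieces would be wired together (the sourcetype name is a placeholder, and the regex is written unquoted, which is how I understand transforms.conf expects it):

props.conf
[my_sourcetype]
TRANSFORMS-routing = all_events, specific_events

transforms.conf
[specific_events]
REGEX = ^((?!ping).)*$
DEST_KEY = _TCP_ROUTING
FORMAT = specific_event_targetgroup

I'm not sure whether the quotes around the REGEX value in my original attempt, or the ordering of the two transforms in props.conf, is what breaks the routing.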
Dear Community, I am new to Splunk, so apologies for the newbie question.

Basic Problem
I have a field which holds an object, and I am having difficulty retrieving the value of a specific key within this object.

Purpose
I am running a search and I want to retrieve two datetime values from two separate keys within a field, find the difference between these two datetime values, and finally return a list of events where the difference is less than a particular value. I know how to return a table of results based on simple criteria and can perform datetime manipulations; I just cannot retrieve the actual datetime values needed to make the calculation.
*I can successfully store the whole object in a variable using the eval command but cannot extract the value from it.

Assumptions
The thing I am working with is indeed an object, i.e. a dictionary-style list in the following format: {"key1" : "value" , "key2" : "value" , "key3" : "value"}
I am attempting to extract the value using the eval command.

Any help would be greatly appreciated.

Kind regards,
Ben
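To make it concrete, this is roughly what I've been trying to build (the field name, key names, and timestamp format are placeholders for my real data):

... | spath input=my_object_field output=start_raw path=startTime
| spath input=my_object_field output=end_raw path=endTime
| eval start=strptime(start_raw, "%Y-%m-%dT%H:%M:%S"), end=strptime(end_raw, "%Y-%m-%dT%H:%M:%S")
| eval diff=end - start
| where diff < 300
| table _time start_raw end_raw diff

The part I can't get working is the extraction step itself, i.e. whether it should be spath or some eval/JSON function.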
Hello Splunk team, I am trying to work out a way to disable the alerts in a particular app when I disable maintenance mode in the master app. Is this possible in Splunk? Please help me out with this.
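One approach I was considering is toggling each saved search over REST, something like this (the app name, alert name, and credentials are placeholders, and I'd still need to script the loop and tie it to the maintenance-mode change):

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/<app_name>/saved/searches/<alert_name> -d disabled=1

Is there a more built-in way to do this per app?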
Hello everyone, I want to build a search that looks for events in index1 and, if it finds an event, takes a field from it and searches for that field's value in another index. If there are 0 events with this field value in the second index, then it should alert. Is this possible?
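What I've sketched so far (the index, field, and filter names are made up, and I'm unsure whether alerting on zero results works this way):

index=index2 [ search index=index1 some_condition | fields tracking_id | dedup tracking_id ]
| stats count

with the alert set to trigger when count is 0.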
Hi, I am having some problems creating an .spl file, which I want to use to integrate into Splunk ES.
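In case my setup is part of the problem: my understanding is that an .spl package is just a gzipped tar of the app directory, so I've been building it like this (the app directory name is mine):

cd $SPLUNK_HOME/etc/apps
tar -czf my_es_content.spl my_es_content/

but I'm not sure whether Splunk ES expects anything beyond that (e.g. specific app.conf metadata) for the import to work.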
I have installed the Splunk Add-on for Microsoft Office 365 Reporting Web Service 2.0.0 and I'm getting:

requests.exceptions.HTTPError: 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2022-08-05T14:01:45.002473Z'%20and%20EndDate%20eq%20datetime'2022-08-05T15:01:45.002473Z'

We are using Modern Authentication (OAuth). As per the doc, we have the Office 365 Exchange Online -> ReportingWebService.Read.All permission. Is Exchange Administrator mandatory?