All Topics


Hi, I'm trying to open a Dashboard Studio dashboard on a polywall and it opens to a white screen with no dashboard, but the classic dashboard opens fine. What's the problem?
Hi, I am trying to use Splunk for the first time and I am not able to complete the devtutorial. I successfully created an app called "Dev Tutorial" (instructions). Then I followed these instructions: I set up an index called "devtutorial" (which I enabled); installed the "Eventgen" app (which appears in my app directory as "SA-Eventgen"); navigated to "Settings >> Data Inputs >> Eventgen" and enabled the "modinput_eventgen" source type; downloaded sample_bundle and changed the index to "devtutorial"; and refreshed Splunk using this link: http://localhost:8000/debug/refresh. But when I go to my "Dev Tutorial" app and search for index="devtutorial", no events show up. Also, when I go to the "SA-Eventgen" app itself, I get no data. Can I get some help with this, please?
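For reference, a minimal eventgen.conf sketch of the "changed the index" step described above — the stanza name, sample file, and interval here are assumptions, not taken from the post:

    # in the sample bundle app, local/eventgen.conf (paths and names assumed)
    [sample.tutorial1.csv]
    mode = sample
    interval = 60
    index = devtutorial

If the stanza is right and events still don't show up, searching over All Time (e.g. index=devtutorial earliest=0) can rule out a sample-timestamp issue.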
I need the output token of a text box to be the "true" option of a radio button. I have two text inputs: Username, going to $upn$, and Asset, going to $asset$ (both default to *). The base search is:

    index=azuread devicename=$asset$ userPrincipalName=$upn$

This works perfectly, allowing filtering by user and/or asset. But I want to pull in our VPN logs (with an append, so that both show in the same table in time order). The trouble is that our VPN logs only record by asset and are very noisy, so they need to be filtered by asset before the append. When asset is "*", everything is displayed, obscuring the Azure login detail.

I've tried adding a radio button (with the token being $vpn$). I've set the false option as the default, returning "This_is_not_a_valid_asset_name", which will not match anything in the VPN logs. I want to set the true option to be $asset$ so that it uses the token from the Asset text box. When selecting false, the search index=VPN deviceName=$vpn$ substitutes $vpn$ with "This_is_not_a_valid_asset_name", which is correct. But when selecting true, the token $vpn$ simply gets substituted with the literal string $asset$, whereas I would expect it to be substituted with the contents of the Asset text input. Any ideas?

The code is something like this (poetic licence is used for simplicity):

    input Title="Insert User Principal Name" type=text token=upn default=*
    input Title="Insert Asset Name" type=text token=asset default=*
    input Title="Include VPN Logs" type=radio token=vpn false="not_an_asset" true="$asset$" default=false

    index=azure userPrincipalName="$upn$" userDeviceName="$asset$"
    | append [search index=VPN deviceName="$vpn$"]

Whilst "Include VPN Logs" is set to false, deviceName="not_an_asset" results in zero VPN logs returned. I need it to pass through the asset detail from the Asset input box when set to true, so that the Azure logon details are interspersed with the VPN logs, making assessment easier.
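A Simple XML sketch of one way the $asset$ indirection is commonly expressed: rather than putting $asset$ in the choice value (where it is not re-resolved), set a second token from a <change> handler, which is evaluated at selection time. The token name $vpn_filter$ is illustrative, not from the post:

    <input type="radio" token="vpn" searchWhenChanged="true">
      <label>Include VPN Logs</label>
      <choice value="true">True</choice>
      <choice value="false">False</choice>
      <default>false</default>
      <change>
        <condition value="true">
          <set token="vpn_filter">$asset$</set>
        </condition>
        <condition value="false">
          <set token="vpn_filter">This_is_not_a_valid_asset_name</set>
        </condition>
      </change>
    </input>

The appended search would then reference deviceName="$vpn_filter$" instead of $vpn$ (note that $vpn_filter$ would go stale if the Asset box changes after the radio selection).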
Hello Team, I'm trying to exclude NULL fields from the results to avoid gaps in the table. Currently I'm using this query:

    <my base search> | fillnull value="NULL" | search NOT NULL | table uid

and the results still show all the NULL cells in the table; they are just named "NULL" instead of being blank. I want to show only the uids of the users. Any suggestions on how I can get past this? Thanks!
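A sketch of one way to keep only results where uid is actually present, filtering the field directly:

    <my base search>
    | where isnotnull(uid) AND uid!=""
    | table uid

A bare NULL term in | search is matched against the raw event text rather than the fields that fillnull just populated, which is likely why the original filter had no effect.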
Hi folks, I have recently been testing out how to ensure the connection between my deployment server and the universal forwarders is secure. I followed the instructions and deployed a new app to a test Windows workstation server class, whose deploymentclient.conf contains these stanzas:

    [deployment-client]
    sslVerifyServerCert=true
    caCertFile=$SPLUNK_HOME/etc/apps/<this apps name>/auth/ca.pem
    sslCommonNameToCheck = <common name in DS cert>

My question is: how can I confirm it is connecting securely? Most of the documentation I find describes securing the indexer-to-forwarder connection, but not the deployment server to client/forwarder connection.
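One client-side spot check, assuming the deployment server listens on the default management port 8089: inspect the certificate it actually presents to the client.

    # run from the test workstation (or any host that can reach the DS)
    openssl s_client -connect <deployment-server>:8089 -showcerts </dev/null

Beyond that, the forwarder's splunkd.log should record the phone-home results; with sslVerifyServerCert=true, a certificate that fails the CA or common-name check typically surfaces there as a handshake error, so a deliberate mismatch test is one way to prove verification is in force.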
How do I remove duplicate values in a different field? My search is: | stats count by src dest
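The question as posted is terse, so this is a guess at the intent: if the goal is one row per src with the duplicate dest values collapsed, one common pattern is

    | stats values(dest) as dest by src

which keeps each distinct dest only once per src.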
Hi Linux experts! I need help with a script that I'm working on to log sudo-enabled users. The script I'm using is below:

    #!/bin/sh
    getent passwd | cut -f1 -d: | xargs -L1 sudo -l -U | grep -v 'not allowed'

It is a .sh file that's run once a day. The corresponding output is then parsed and massaged by some SEDCMD stuff, not relevant here. This way, I can see which users are able to perform sudo on the machine. Note: I am aware of usersWithLoginPrivs.sh, but it includes users that I'm not interested in, hence the custom script. If there's another solution you can share, that'd be great.

But here's my PROBLEM: Linux admins are complaining that the splunk user that runs this script is generating messages for them, and they don't want to get the messages. So they suggested appending this to the end of the command:

    > /dev/null 2>&1

which I did. However, it no longer prints output, even for those Splunk UFs that previously produced it. Yes, the main solution to this problem is to give the splunk user permission to run the script, but due to the complexity of our organization we can't request the same thing across the board. So, of the thousands of Linux servers that we have, some can run this script and some cannot. That's currently okay. But for those that cannot, I'd like to modify the script in such a way that it will still work the same but will not produce any errors. Is there an alternative?
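A sketch of one option, assuming the noise the admins see comes from the command's error output: discard only stderr and leave stdout intact for the forwarder to ingest.

    #!/bin/sh
    # same enumeration as above; 2>/dev/null drops the error output
    # (permission complaints, unknown-user messages) while the user
    # list still reaches stdout
    getent passwd | cut -f1 -d: | xargs -L1 sudo -l -U 2>/dev/null | grep -v 'not allowed'

If the messages are sudo's mail-on-failure notifications rather than stderr, suppressing stderr alone won't silence them; that behavior is controlled by the sudoers policy (e.g. the mail_* flags).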
I currently have Splunk forwarder 9.0 installed on my Windows 11 computer, monitoring a folder synced from OneDrive with the following stanza:

    [batch://C:\Users\esnsanma\Documents\OneDrive - Carvajal S.A\ReportesNdd\8001466435\*]
    disabled = false
    index = idx_ndd_group
    sourcetype = st_ndd_congroup
    crcSalt = <SOURCE>
    move_policy = sinkhole

Every time I restart my computer it stops detecting the files. For example, after I start my computer it doesn't index the files; it only detects them when I copy and paste the files into the same folder. Is there a command to force indexing of the entire folder tree?
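For a one-off re-read there is the oneshot CLI input; a sketch, with the target file left as a placeholder (whether this fits the batch/sinkhole workflow here is untested):

    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add oneshot "<full path to one file>" -index idx_ndd_group -sourcetype st_ndd_congroup

Also worth checking: move_policy = sinkhole deletes each file once it is indexed, so after a reboot there may simply be nothing left in the folder until OneDrive re-syncs it; a monitor:// stanza is the non-destructive alternative if deletion isn't intended.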
I'm creating a dashboard that allows you to select a region, which will then retrieve data only for customers in that region. Each customer has their own index, and the index name is the customer name. I'd like to avoid a subsearch as it's limited to 10k rows (though you can subsearch the lookup). The region isn't included in the customer index data.

Lookup data set:

    Region | Customer
    US1    | Mcdonalds
    US2    | Macys
    AU1    | Atlassian
    AU2    | Outback

The issue I have run into is that when I retrieve the list of customer names from the lookup, the subsearch is limited to 10k rows, and there are many more rows that need to be included. I created a very inefficient query which I'm unhappy about, hence why I'm here:

    index="*" [inputlookup CSS_Customers where Region = $inputregion$ | fields Customer | rename Customer as index]

Note: I tried tstats to pull a single field, but ran into an index issue. It could be because our "index" field isn't indexed.
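A sketch of one alternative that sidesteps the subsearch row limit entirely: enrich each event with its Region from the lookup at search time and filter on that (lookup and field names taken from the post; the cost of scanning index=* remains):

    index=*
    | lookup CSS_Customers Customer AS index OUTPUT Region
    | where Region="$inputregion$"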
I have an existing database input that is reading from an Oracle database.  Existing Dashboard A uses that database input.  I want to use the same database input on a new Dashboard B, but to do so I would need to include one additional field in the SELECT statement of the database input.  It wouldn't change how many rows are returned by the query (no select distincts), and no additional joins are needed.  Are there any negative ramifications of doing this, or am I good to go?  We only have a PROD environment, so I want to be extra cautious in making any changes.
I'm creating a new dashboard using an existing database input.  This dashboard will have multiple (7+) panels searching the same database input, with the same filtering criteria, but different grouping and aggregating.  I was wondering if there is a better way to do this?  Does this cause 7+ separate reads to the same DBInput when it could be accomplished in one?  Thanks!
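One pattern that avoids repeated reads is a shared base search with per-panel post-processing: the base search runs once, and each panel applies only its own grouping. A Simple XML sketch with an invented base query and field name:

    <form>
      <search id="base">
        <query>index=my_db_index source=my_db_input</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
      <row>
        <panel>
          <table>
            <search base="base">
              <query>| stats count by status</query>
            </search>
          </table>
        </panel>
      </row>
    </form>

One caveat: post-processing works best when the base search is transforming (or at least trims fields), since base searches that return large raw event sets are subject to result limits.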
All, so my Management Console health check is flagging my indexers with "Local indexing on non-indexer instances". Did I miss something needed to pass this health check? Indexers shouldn't need to output anything, per my understanding. The indexer is correctly assigned the "indexer" role in the Management Console setup. Something I missed? Perhaps a .conf file somewhere that also needs updating? Splunk 9.0 on EL8.
Our Windows slave comes back with errors in jenkins-slave.err.log that the host cannot be resolved:

    com.splunk.splunkjenkins.utils.LogConsumer run
    SEVERE: message not delivered:{length:3173 {"host":"our.host.com","index":"jenkins_console","sourcetype":"text:jenkins","time":"1660230122.365","source":"job}
    java.net.UnknownHostException: http-inputs-ourhost.splunkcloud.com
        at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
        at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
        at java.net.InetAddress.getAllByName(InetAddress.java:1193)
        at java.net.InetAddress.getAllByName(InetAddress.java:1127)
        at com.splunk.splunkjenkins.utils.MultipleHostResolver.resolve(MultipleHostResolver.java:23)
        at shaded.splk.org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
        at shaded.splk.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
        at shaded.splk.org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
        at shaded.splk.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
        at shaded.splk.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at shaded.splk.org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at shaded.splk.org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at shaded.splk.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
        at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:221)
        at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
        at shaded.splk.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
        at com.splunk.splunkjenkins.utils.LogConsumer.run(LogConsumer.java:89)

It seems that the configured http-inputs-ourhost.splunkcloud.com entry is used on the slave, which cannot be resolved.

Env:
Jenkins 2.334
Splunk Plugin 1.9.9
Splunk Plugin Extensions 1.9.9
Windows Slave in Kubernetes Cluster
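A quick check from inside the agent, to confirm whether this is cluster DNS rather than the plugin (hostname taken from the log above):

    nslookup http-inputs-ourhost.splunkcloud.com

In a Kubernetes-hosted agent, external names can fail to resolve when the cluster DNS has no upstream forwarder for them, in which case the fix belongs at the CoreDNS/dnsPolicy level rather than in the Splunk plugin configuration.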
Hello. I have an index of Akamai logs forwarded to Splunk, and I'm trying to query for origin latency, which is a JSON field (netPerf.netOriginLatency) in these logs. How can I query through the CLI to return the value of this field aggregated per minute, for example (average latency per minute)? First, I tried to query just for the field value (without time filters) with spath as shown below, but it didn't work:

    ./splunk search 'index=akamai message.fwdHost=someservice.mydomain.com | spath=netPerf.netOriginLatency'

How could I do that? Is it possible? Best,
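A sketch of corrected syntax — spath takes the path as an argument (optionally via path=), not in the form spath=...; the timechart step assumes the extracted value is numeric:

    ./splunk search 'index=akamai message.fwdHost=someservice.mydomain.com
      | spath path=netPerf.netOriginLatency output=origin_latency
      | timechart span=1m avg(origin_latency)'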
Hello! I am trying to use makeresults + eval inside sendalert parameters, but it doesn't return what I need. Here is the example:

    index=client1 sourcetype=report_case source=splunk-hf
    | table action_date case_post_date action_taken arn scheme_case_number client_internal_id uuid acquirer_case_number
    | sendalert s3_upload param.bucket_name="bucket_name" param.file_format="csv" param.file_name=[|makeresults | eval filename=strftime(now(), "filename-PreviousDay_%Y_%m_%d_%H_%M_%S") | return $filename]

The file is created, but with a default name ("test_20220811.csv"). What am I doing wrong in the search? Thanks
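A sketch of an alternative, assuming the s3_upload action expands $result.*$ tokens in its parameters (many custom alert actions do; this particular one is not confirmed): compute the filename as a field in the main search and reference it.

    index=client1 sourcetype=report_case source=splunk-hf
    | eval filename=strftime(now(), "filename-PreviousDay_%Y_%m_%d_%H_%M_%S")
    | table action_date case_post_date action_taken arn scheme_case_number client_internal_id uuid acquirer_case_number filename
    | sendalert s3_upload param.bucket_name="bucket_name" param.file_format="csv" param.file_name="$result.filename$"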
Hi All, Splunk 101 question. What are our options if we want to forward OS-level logs (for example, ssh user login/logout activity) from a deployment server to our indexer? As a DS is a full Splunk Enterprise instance, it is not recommended to put a UF on the same host. Where do I configure it to monitor the OS syslog file as well? Is it /etc/system/local/inputs.conf? If yes, how do I maintain this inputs.conf copy for updates, as I assume we cannot push updates to this file from the same host itself? Any best practices here? My DS is currently sending _audit and _introspection logs to the IDX, which contain info about the Splunk platform, not the OS. Hope I am clear. Thank you
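A sketch of a monitor stanza for this, placed in a dedicated app rather than etc/system/local so it stays separate from the DS's own config (the app name, index, and log path are illustrative assumptions):

    # $SPLUNK_HOME/etc/apps/ds_os_inputs/local/inputs.conf
    [monitor:///var/log/secure]
    sourcetype = linux_secure
    index = os_linux
    disabled = false

On Debian-family hosts the ssh activity lands in /var/log/auth.log instead. Because this app lives on the DS itself, it would be maintained by hand or by your own config management rather than pushed as a deployment app.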
Hi All, can somebody help me start building this alert?

Alert on PW Startup Critical Failure: the alert should trigger if any events with the following error message are seen, and the impacted hosts should be listed in the alert email.

Base search:

    index=app_v source=*System.log "Instantiation of bean failed; nested exception is org.springwork.beans.BeanInstantiationException: Could not instantiate bean class [iv.ws.report.pw.ipg.cache.SchedulerJob]: Constructor threw exception"

This error means the PW application has not started up successfully following a code deployment or server start.
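A sketch of a starting point: keep the base search as-is and append a stats step, so a triggered alert carries one row per impacted host; the trigger condition would then be "number of results > 0", with the inline results table enabled in the email action.

    index=app_v source=*System.log "Instantiation of bean failed; nested exception is org.springwork.beans.BeanInstantiationException: Could not instantiate bean class [iv.ws.report.pw.ipg.cache.SchedulerJob]: Constructor threw exception"
    | stats count by host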
It appears that configuration.py in SA-ldapsearch references the deprecated caPath and caCertFile settings from server.conf:

    ca_path = os.path.expandvars(get_ssl_configuration_setting('caPath',
    ca_cert_file = get_ssl_configuration_setting('caCertFile', default='')

I've asked support whether that will be updated, and whether the workaround is to configure the caPath and caCertFile settings.
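If the workaround route is taken, a sketch of what it would presumably look like in server.conf — re-declaring the deprecated settings alongside their modern replacement (values illustrative; whether SA-ldapsearch honors this is exactly the open support question):

    [sslConfig]
    sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
    # deprecated settings, restored only so SA-ldapsearch's configuration.py can read them
    caPath = $SPLUNK_HOME/etc/auth
    caCertFile = cacert.pem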
Hello, I'm writing because I have an issue with Splunk UI and an SPL search. I'm a new developer and am discovering the Splunk UI framework. Everything was fine until now: when I use raw data in the dashboard it works, but when I use an SPL search such as:

    search3: {
      type: 'ds.search',
      options: {
        query: "index=\"phantom_container\" | dedup id | search severity = \"critical\" | stats count",
        queryParameters: {
          earliest: "-7d@d",
          latest: "now",
        },
      },
      meta: {},
    },

Splunk says that a TenantId is required. I don't understand this issue. Can you resolve it or give me a solution, please? Any help is welcome.
Hi, I have a series of bar charts, and when I hover over each bar I currently see the count value. What I actually need is the percentage value. Here is my current query and bar chart:

    | inputlookup Migration-Status-All.csv
    | search Vendor = "Symantec"
    | eval dummy = 'Migration Comments'
    | chart count over "Migration Comments" by dummy

How can I change my query to show a percentage when hovering over each bar? Many thanks, Patrick
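The hover tooltip shows whatever value is plotted, so one sketch is to chart the percentage itself (field names taken from the query above):

    | inputlookup Migration-Status-All.csv
    | search Vendor = "Symantec"
    | stats count by "Migration Comments"
    | eventstats sum(count) as total
    | eval percent=round(count/total*100, 1)
    | fields "Migration Comments" percent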