All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone, I am facing a problem with dropdowns. I have two dropdowns: one for a group and one for a subgroup. The first dropdown lists the group names, and the second shows the subgroups of whichever group was selected in the first. I want to include an "All" option in the second dropdown, but I only want to pass the subgroups belonging to the selected group to my queries. If I use "*" for "All", it matches every subgroup regardless of which group is selected. Does anyone know how to solve this?
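One pattern that is sometimes suggested for this (a sketch only; the index name my_index, the field names group/subgroup, and the token name $group$ are placeholders, not taken from the post): populate the second dropdown from the currently selected group, keep "All" as a static "*" choice, and make the panel query filter on both tokens, so that "*" only expands within the selected group.

<input type="dropdown" token="subgroup" searchWhenChanged="true">
  <label>Subgroup</label>
  <choice value="*">All</choice>
  <default>*</default>
  <search>
    <query>index=my_index group="$group$" | stats count by subgroup</query>
  </search>
  <fieldForLabel>subgroup</fieldForLabel>
  <fieldForValue>subgroup</fieldForValue>
</input>

<!-- panel search: both tokens constrain the results, so "*" stays inside the selected group -->
<query>index=my_index group="$group$" subgroup="$subgroup$" | stats count by subgroup</query>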
Hi Splunkers, I know how to restrict users from exporting data in Splunk Web. Does anyone happen to know how we can restrict users from exporting data via the REST API or the CLI?
Hi All, please help me with the following: how do I integrate SaaS app logs into Splunk? We need to integrate the Miro app with Splunk, but we don't want third-party apps; we only want to use Splunk-provided apps for the integration. Is there another method we can follow to ingest the SaaS app logs?
I have the following table that I would like to summarize as total logins and total token creations, by creating a new table with two rows: one for CLIENT_LOGIN + LOGIN and one for CODE_TO_TOKEN + REFRESH_TOKEN. How do I sum two rows? Thanks.

CLIENT_LOGIN 81392
CLIENT_LOGIN_ERROR 290
CODE_TO_TOKEN 2984
CODE_TO_TOKEN_ERROR 13
CUSTOM_REQUIRED_ACTION_ERROR 3
INTROSPECT_TOKEN 33
LOGIN 10559
LOGIN_ERROR 1240
LOGOUT 2
REFRESH_TOKEN 51
REFRESH_TOKEN_ERROR 126
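A sketch of one way to do this, assuming the two columns in that table are named type and count (adjust the field names to whatever your actual search produces): map each event type to a category, drop the rest, and sum per category.

| eval category=case(type=="CLIENT_LOGIN" OR type=="LOGIN", "total_logins", type=="CODE_TO_TOKEN" OR type=="REFRESH_TOKEN", "total_token_creations")
| where isnotnull(category)
| stats sum(count) as total by category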
I'm looking at designing a Splunk data catalogue that captures all source types (and their metadata) that are currently being ingested, so that we can quickly see what the current state of the workspace is. For example, a customer who wants access to event X can use the catalogue to check whether source type Y already exists. Has anyone done something similar, or does anyone have suggestions? I'm quite new to Splunk, but this seemed like it could be a common 'nice to have' for Splunk users. Thanks.
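A minimal starting point (a sketch; it assumes your role can search index=* and that a lookup name like data_catalogue.csv is acceptable) that inventories which index/sourcetype combinations exist and when they last received data:

| tstats count latest(_time) as last_seen where index=* by index sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| outputlookup data_catalogue.csv

Scheduling something like this and enriching it with owner or description fields from a manually maintained lookup is one way to keep the catalogue current.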
Hi, in the top menu at the Splunk level (the black bar) there is a `Find` text box. Are the contents of this box made available as a token, such that I can use it in a dashboard? I want to change how this field works within certain applications, so that it can be used within a search in my dashboard rather than by the Splunk application itself. TIA.
I'm using Splunk Enterprise 8.2.5 on Windows and using the deployment server to push apps. There is currently no indexer configured in /etc/system/local/outputs.conf, as we do all of this in the app. Our security team wants our forwarders to start shipping events over TLS 1.2. As a quick test, I have the following working without a deployment server app:
- Indexer listening for TLS 1.2 on TCP port 9998, using my PKI-signed certificate
- Forwarder using the default server.pem certificate and verifying the indexer certificate
This works fine, but now I have the issue of how to deploy apps from the deployment server. If I use the default, a self-signed, or a PKI-signed client certificate on the forwarder, I must secure the private key with a password and specify that password in outputs.conf. Therefore, if I specify the indexer for a given app (not all apps have the same indexer!), I need to specify the password for that app in the <app>/outputs.conf file on the deployment server. I tested this, but one issue is that the password does not get encrypted on the deployment server or on the target UF. I'd realistically need the same client certificate on all forwarders to make this manageable. Should I be defining all indexers outside of deployment server apps in /etc/system/local/outputs.conf and then routing to them in <app>/outputs.conf instead?
Note: Although it states here that a password specified in a conf file outside /etc/system/local/ will not be encrypted, I have tested this and it is encrypted! This whole area of configuration is very confusing, IMO.
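A sketch of one possible layout for the question above (an assumption to test, not something taken from the docs; the app names, certificate paths, and group name are placeholders): keep the shared TLS and client-certificate settings in a single base app pushed to every forwarder, keep only the per-app target group in each app's outputs.conf, and route that app's inputs to its group with _TCP_ROUTING.

# org_all_forwarder_tls/local/outputs.conf -- pushed to every forwarder
[tcpout]
useSSL = true
clientCert = $SPLUNK_HOME\etc\auth\mycerts\forwarder.pem
sslPassword = <private key password>
sslRootCAPath = $SPLUNK_HOME\etc\auth\mycerts\MyPKIChain.pem
sslVerifyServerCert = true

# my_app/local/outputs.conf -- per-app indexer group only
[tcpout:my_app_indexers]
server = indexer1.mydomain.com:9998

# my_app/local/inputs.conf -- send this app's inputs to its own group
[monitor://D:\logs\my_app]
_TCP_ROUTING = my_app_indexers

Note that the sslPassword value would still be distributed as part of the base app, so this layout reduces duplication but does not by itself solve the cleartext-password concern.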
Using Splunk Enterprise 8.2.4 on Windows and trying to configure my forwarders to use SSL to forward events to my indexers. Client certificate verification is not enabled on the indexer. Reading the guide here, it states some configuration details that don't seem to hold true in testing:

sslPassword = <password>
* The password associated with the Certificate Authority certificate (CAcert).

Isn't this the password for the client certificate (server.pem by default) rather than for the CA certificate? It also states:

useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to "false" on the receiver.

This appears to indicate that you can use SSL without a client certificate (which would be great!). However, if I simply add useSSL = true to my forwarder's outputs.conf, the SSL connection does not come up and the following appears in splunkd.log, indicating it is looking for a client certificate file:

03-30-2022 23:39:47.933 +0100 ERROR SSLCommon [31888 parsing] - Can't read certificate file errno=33558528 error:02001000:system library:fopen:system library

My outputs.conf is as follows:

[tcpout:test-ssl-1]
disabled = 0
server = indexer1.mydomain.com:9998
useSSL = true
useClientSSLCompression = true
sslVerifyServerCert = false

This outputs.conf does work:

[tcpout:test-ssl-1]
disabled = 0
server = indexer1.mydomain.com:9998
useSSL = true
useClientSSLCompression = true
sslVerifyServerCert = false
sslCommonNameToCheck = indexer1
sslAltNameToCheck = indexer1.mydomain.com
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\MyPKIChain.pem

I have no idea why the second config works, given that sslVerifyServerCert is false.
How to manually download the IP database and add it to Splunk User Behavior Analytics (UBA)?
I need to configure Splunk Enterprise, using the reporting and notification tools, to create a report with notification for the following events:
- Loss of communication with hosts and devices
- Logs no longer being collected
How would I go about crafting a search for these requirements?
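A minimal sketch of one approach (the index=* scope and the 60-minute threshold are assumptions to adjust): find hosts whose most recent event is older than the threshold, then save the search as a scheduled report or alert with an email action.

| tstats latest(_time) as last_event where index=* by host
| eval minutes_since_last_event=round((now()-last_event)/60)
| where minutes_since_last_event > 60
| convert ctime(last_event)

This covers "logs no longer being collected"; detecting devices that have never logged at all would additionally need a lookup of expected hosts to compare against.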
Hi! I can't seem to figure out how to get a count of each operation in a document like the one below:

{
  request_id: 12345
  revision: 123
  other_field: stuff
  my_precious: {
    1648665400.774453:  { keys: [ key:key1 ]  op: operation_1 }
    1648665400.7817056: { keys: [ key:key2 ]  op: operation_2 }
    1648665400.7847242: { keys: [ key:key4 ]  op: operation_1 }
    1648665400.7886434: { keys: [ key:key5 ]  op: operation_3 }
    1648665400.7932374: { keys: [ key:key3 ]  op: operation_2 }
  }
}

I want to be able to see the count of each operation. For example, the above would yield:

operation_1: 2
operation_2: 2
operation_3: 1

I've tried the following rex, which is unreliable to be honest, as there could be other documents containing " op: ", but not even this works:

| rex "(?<opr>(?<= op: )\w+)" | stats count by opr

Any help is appreciated!
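A sketch of one approach, assuming the raw event is actual JSON (so the values appear as "op":"operation_1" in _raw; the layout above is just the event viewer's rendering): extract every op value in the event, expand the multivalue field, and count.

| rex max_match=0 "\"op\"\s*:\s*\"(?<opr>[^\"]+)\""
| mvexpand opr
| stats count by opr

Anchoring on the quoted "op" key makes the match less likely to hit other documents that merely contain the text " op: ".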
I have a Splunk Enterprise cluster (version 8.1.3) that, for some reason, is not returning any results for indexed real-time searches, but regular searches and regular real-time searches work just fine. When I have my search app configured with indexed_realtime_use_by_default = false, my real-time searches return results fine. When indexed_realtime_use_by_default is true, the same search returns no data. If I change the search from a real-time search to any sort of historical search, I also get results, including over the same time period my real-time search is running. Does anyone have suggestions on what I should look into?
Hello, I have the following Splunk search, which returns detailed connection logs for all users of the VPN concentrator (F5) over the past 90 days. I need to run the same search for only the 30 login_name users listed in a CSV file. How can I build that search? My current search for all users, using "| search login_name=*", is:

index=index-f5 sourcetype="f5:bigip:apm:syslog" ((New session) OR (Username) OR (Session deleted))
| transaction session_id startswith="New session" endswith="Session deleted"
| rex field=_raw "Username '(?<login_name>.\S+)'"
| search login_name=*
| eval session_time=tostring(duration, "duration")
| table _time login_name session_id session_time
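A sketch of one way to do this, assuming the CSV has been uploaded as a lookup (the name vpn_users.csv and its column name login_name are placeholders): replace the "| search login_name=*" line with a subsearch against the lookup, which expands into an OR of the 30 login_name values.

| search [ | inputlookup vpn_users.csv | fields login_name ]

Everything else in the search can stay the same, since login_name has already been extracted by the rex at that point in the pipeline.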
I found how-to links for generating CSRs for inter-Splunk communication and for Splunk Web, so that both can use third-party-generated certificates. However, the processes are almost identical, so I'm wondering whether I need to go through the process twice so that each use case gets its own certificate, or whether I only have to do it once and use the single resulting certificate for both use cases, since technically the common name would be the same for both. I couldn't find anything in the documentation that states this either way.
https://docs.splunk.com/Documentation/Splunk/8.2.5/Security/Howtogetthird-partycertificates
https://docs.splunk.com/Documentation/Splunk/8.2.5/Security/Getthird-partycertificatesforSplunkWeb
I have the search below:

| tstats summariesonly=true count, sum(All_Traffic.bytes) as total_bytes, sum(All_Traffic.packets) as total_packets from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.dest_ip, All_Traffic.action
| rename "All_Traffic.*" as *
| stats sum(total_bytes) as total_bytes, sum(total_packets) as total_packets by src_ip dest_ip action
| sort 0 -total_bytes
| streamstats count as count by action
| search count<=20

The purpose of the last three lines (sort and streamstats) is to get the top 20 results by total_bytes for each value of the action field. The only problem with this solution is that streamstats has a limit of 10000 in limits.conf. Is there a better solution?
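A sketch of one alternative that avoids streamstats entirely (worth validating against your data first): after sorting by total_bytes descending, "dedup 20 action" keeps only the first 20 results for each action value.

| tstats summariesonly=true sum(All_Traffic.bytes) as total_bytes, sum(All_Traffic.packets) as total_packets from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.dest_ip, All_Traffic.action
| rename "All_Traffic.*" as *
| stats sum(total_bytes) as total_bytes, sum(total_packets) as total_packets by src_ip dest_ip action
| sort 0 -total_bytes
| dedup 20 action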
Can anyone help me figure out how to export a Dashboard Studio page into a multi-page PDF? Currently, if I try to export a long dashboard, it shrinks everything onto one page and does not detect what is text and what is an image. Simple XML dashboards would lose formatting, but they would at least break the export into multiple pages.
Hi, I have tried many different ways to match a token against a string with like(), in order to set and unset a different set of tokens, but I just can't seem to meet the condition, even though I know I am selecting a click.value (which gets saved into a token) and that token value contains the string I am using in the like() call. What am I doing wrong? Please help.

<chart>
  <search>
    <query>index=car | dedup run_id | top limit=100 sourcetype | search sourcetype=$form.car_type$</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="charting.chart">column</option>
  <drilldown>
    <condition>
      <set token="show_panel">true</set>
      <set token="form.car_type">$click.value$</set>
      <set token="clickedfixture">$click.value$</set>
    </condition>
    <condition match="like($form.car_type$,&quot;%ford%&quot;)">
      <set token="carford">true</set>
      <unset token="data_entry"></unset>
      <unset token="attachment"></unset>
    </condition>
  </drilldown>
</chart>
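A sketch of one thing to check (this assumes the usual Simple XML drilldown behavior, where only the first <condition> that matches is applied): because the first <condition> above has no match attribute, it always wins and the like() condition never runs. Matching on $click.value$ directly and putting the more specific condition first would look roughly like this:

<drilldown>
  <condition match="like($click.value$, &quot;%ford%&quot;)">
    <set token="show_panel">true</set>
    <set token="form.car_type">$click.value$</set>
    <set token="clickedfixture">$click.value$</set>
    <set token="carford">true</set>
    <unset token="data_entry"></unset>
    <unset token="attachment"></unset>
  </condition>
  <condition>
    <set token="show_panel">true</set>
    <set token="form.car_type">$click.value$</set>
    <set token="clickedfixture">$click.value$</set>
  </condition>
</drilldown>

Using $click.value$ in the match also avoids comparing against the pre-click value of $form.car_type$.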
Hi, I'm having issues with an ITSI glass table. We created a layout and set the primary data, and the data was there and could be visualized. After completing all the configuration, we clicked save and refreshed the page, and the entire data configuration became invalid (meaning nothing is visualized and the data shows as null). We then redid the data configuration, saved, and refreshed the page, and the data became null again. I tried cloning the glass table and reconfiguring the data, and it still doesn't work. It seems like the app has a bug or something else is wrong. Is this a common issue with ITSI? Is there anything I can do to resolve it? Please assist.
I have a lookup file that I am generating with a query. The query currently returns ~59,000 rows. If I run the query in a free-form Splunk search, the CSV file is populated with all 59,000+ entries. But if I schedule that query to run overnight via a report, the CSV file is truncated to 50,000 entries. What I'm trying to reconcile about the scheduled report is:
1. Under View Recent it took 29s to run, so it finished well under any 60s limit: 00:00:29
2. Under View Recent it says it found 59,633 rows, for a size of 8.88 MB.
3. The job also says it finished and returned 59,633 results in 28.612 seconds.
I've seen a few questions around the 50k limit and the stanzas that can increase it, but my questions are:
1. Nothing in View Recent or in the job warns that the results have been truncated.
2. Why does scheduling the report differ in its limitations from running it as a free-form search?
I have a query that calculates the daily availability percentages of a given service for a set of hosts and is used to create a multi-series line chart in a Splunk dashboard. My ps.sh runs every 1800 seconds (30 minutes) on my Splunk forwarders, so I assume it runs a total of 48 times on any given day, which is what I use to calculate availability in an eval. The problem is that on the current date ps.sh hasn't run all 48 times yet, so I can't get a valid calculation for today. However, if I could check whether the date in question is the current date, then calculate the number of seconds that have elapsed since the most recent midnight, I could divide that figure by 1800 to work out how many times ps.sh should have run so far today (hopefully I'm not overcomplicating this). To illustrate, here's my query with pseudo-code for the desired logic, using rhnsd as an example process:

index=os host="my-db-*" sourcetype=ps rhnsd
| timechart span=1d count by host
| untable _time host count
| addinfo
| eval availability=if(<date is current date>, count/floor((info_max_time-<nearest midnight time>)/1800)*100, if(count>=48, 100, count/48*100))
| rename _time as Date host as Host availability as Availability
| fieldformat Date = strftime(Date, "%m/%d/%Y")
| xyseries Date Host Availability

Any help completing the above eval would be greatly appreciated, or if I'm overcomplicating this, any alternative methodologies would be more than welcome.
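A sketch of one way to fill in the pseudo-code (assuming each row's _time is the start of its day, which is how timechart span=1d buckets it, and that now() is an acceptable stand-in for info_max_time):

| eval midnight_today=relative_time(now(), "@d")
| eval expected_runs=max(if(_time>=midnight_today, floor((now()-midnight_today)/1800), 48), 1)
| eval availability=round(min(count/expected_runs, 1)*100, 2)

The max(..., 1) guards against dividing by zero right after midnight, and min(..., 1) caps availability at 100% if a few extra runs land in a bucket.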