All Topics

We are running Splunk 9.0.5. We want to add an index to the default indexes for a user role, but the index does not show up in the list of indexes on the "Indexes" tab of the "Edit User Role" window on the search head. There is data in the index, and we do see the index in the Monitoring Console under Indexing / Index Detail: Deployment. We also added the following to /opt/splunk/etc/system/local/server.conf on the search head (and restarted the Splunk service on the search head afterwards):

[introspection:distributed-indexes]
disabled = false

The index was created earlier (before 9.0.5) via the manager node file /opt/splunk/etc/master-apps/_cluster/local/indexes.conf (now moved to manager-apps). A push of the bundle did not make any changes (the peers already had the correct version). What else could be the issue here?
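If the UI list never populates, a possible workaround is to set the role's index lists directly in authorize.conf on the search head. This is only a sketch: the role name `example_role` and the index name `my_index` below are placeholders, not names from this post.

```
# authorize.conf, e.g. in an app's local/ directory on the search head
# role and index names below are placeholders
[role_example_role]
srchIndexesAllowed = main;my_index
srchIndexesDefault = main;my_index
```

Both settings take semicolon-separated index lists, and a restart (or debug/refresh) is needed before the role picks up the change.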
The solution in "Solved: Re: How do you make a table that reduces in height... - Splunk Community" does not work anymore, at least not for Splunk Cloud version 9.0.2303.201. Hopefully it's just a simple tweak that's needed?

**UPDATE** I'm adding a screenshot and the code I use that used to work:

<dashboard version="1.1" theme="dark">
  <label>TEMP GABS test css</label>
  <row>
    <panel>
      <table id="table1">
        <search>
          <done>
            <condition match="'job.resultCount' != 0">
              <set token="table1TableHeightCSS"></set>
              <set token="table1TableAlertCSS"></set>
            </condition>
            <condition match="'job.resultCount' == 0">
              <set token="table1TableHeightCSS">height: 50px !important;</set>
              <set token="table1TableAlertCSS">position:relative; top: -130px !important;</set>
            </condition>
          </done>
          <query>| stats count | where count&gt;0</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| stats count </query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row depends="$never_set$">
    <panel>
      <html>
        <style>
          #table1 .splunk-table{ $table1TableHeightCSS$ }
          #table1 .alert-info{ $table1TableAlertCSS$ }
        </style>
      </html>
    </panel>
  </row>
</dashboard>

**UPDATE 2** If I go in and out of the edit-source mode, it starts to work. But if I just reload the dashboard from scratch, it doesn't. I've cleared all my browser's data (cookies, cache, etc.) and restarted my browser (Edge version 114.0.1823.51) and can confirm the same behaviour.
Hi Team, my SQL database server is hosted on Azure. Is there a way to onboard its logs to Splunk? Can we use the DB Connect app for servers hosted on Azure, and will it work? Please let me know if there is any alternative solution for this.
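DB Connect talks to databases over JDBC, so an Azure SQL database is generally reachable as long as the Splunk host can open port 1433 to it and the Azure SQL firewall allows the client IP. A sketch of the JDBC URL shape used for Azure SQL (the server and database names below are placeholders):

```
jdbc:sqlserver://myserver.database.windows.net:1433;databaseName=mydb;encrypt=true;trustServerCertificate=false;loginTimeout=30
```

In DB Connect this would be configured as an MS SQL connection; dbxquery and batch/rising-column inputs should then behave as with an on-prem SQL Server.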
Hello! I want to know how to count the number of field values. Currently I have two fields, something like:

User - Anna
Class - Math, Science, English

The "Class" field is a multivalue field, and I want the output to be something like this:

User - Anna
Class - Math, Science, English
Class_Number - 3

Is there an easy way to count how many values are in a multivalue field and show it? Thanks in advance!
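The `mvcount` eval function returns the number of values in a multivalue field. A sketch using the field names from the post:

```
... base search ...
| eval Class_Number=mvcount(Class)
```

Note that `mvcount` returns NULL when the field is absent, so `| eval Class_Number=coalesce(mvcount(Class), 0)` is safer if some events have no Class value.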
Hi Team, good day! I need to send alert notifications to Teams via connectors. I filled in the mandatory fields in the MS Teams alert action, but I can't see the notifications in MS Teams, even though I can see the triggered alerts in Splunk. Can anyone help me sort this out? Thanks in advance! Manoj Kumar S
Hi all, in our infrastructure we are integrating a heavy forwarder belonging to another company. We need this HF to send logs to both SIEMs; below is a diagram:

In our company (APP1): Universal Forwarder -> Heavy Forwarder -> Splunk Cloud
Company to integrate (APP2): Universal Forwarder -> Heavy Forwarder -> Splunk On-Prem

Here are the output files:

---APP1---
[tcpout]
defaultGroup = splunkcloud_APP1
useAck=true

[tcpout:splunkcloud_splunkcloud_APP1]
server = inputs1.APP1-splunkcloud.splunkcloud.com:9997, inputs2.APP1-splunkcloud.splunkcloud.com:9997, inputs3.APP1-splunkcloud.splunkcloud.com:9997, inputs4.APP1-splunkcloud.splunkcloud.com:9997, inputs5.APP1-splunkcloud.splunkcloud.com:9997, inputs6.APP1-splunkcloud.splunkcloud.com:9997, inputs7.APP1-splunkcloud.splunkcloud.com:9997, inputs8.APP1-splunkcloud.splunkcloud.com:9997, inputs9.APP1-splunkcloud.splunkcloud.com:9997, inputs10.APP1-splunkcloud.splunkcloud.com:9997, inputs11.APP1-splunkcloud.splunkcloud.com:9997, inputs12.APP1-splunkcloud.splunkcloud.com:9997, inputs13.APP1-splunkcloud.splunkcloud.com:9997, inputs14.APP1-splunkcloud.splunkcloud.com:9997, inputs15.APP1-splunkcloud.splunkcloud.com:9997
compressed = false
clientCert = /opt/splunk/etc/apps/APP1/default/APP1-splunkcloud_server.pem
sslCommonNameToCheck = *.APP1-splunkcloud.splunkcloud.com
sslVerifyServerCert = true
useClientSSLCompression = true
autoLBFrequency = 120

---APP2---
[tcpout:APP2]
server = 172.28.xxx.xxx:9997
autoLBFrequency = 180
compressed = true
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = []
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
sslVerifyServerCert = false

So we have two apps, and we tried to merge them so as to have a single app with a single output file and the certificates in the same folder. We also implemented the necessary CMs for communications and created the same indexes on Splunk Cloud. We applied these configurations to the HF of the company to be integrated.
The problem is that it only communicates with its on-prem Splunk. Thanks in advance.  
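If the merged outputs.conf is meant to clone data to both destinations, the usual pattern is to list both target groups in defaultGroup. A sketch built from the two stanzas above (names, certificates, and the full server list must match the real configuration):

```
[tcpout]
defaultGroup = splunkcloud_APP1, APP2
useAck = true

[tcpout:splunkcloud_APP1]
# full Splunk Cloud server list as in the original stanza
server = inputs1.APP1-splunkcloud.splunkcloud.com:9997
clientCert = /opt/splunk/etc/apps/APP1/default/APP1-splunkcloud_server.pem
sslVerifyServerCert = true

[tcpout:APP2]
server = 172.28.xxx.xxx:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslVerifyServerCert = false
```

One thing worth double-checking in the APP1 config as posted: defaultGroup is splunkcloud_APP1, but the stanza is named [tcpout:splunkcloud_splunkcloud_APP1]. The group name in defaultGroup must match the stanza name exactly, or that destination will not be used.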
Dear all, we moved to Splunk 8.2.11, and since then our selected fields keep resetting every time I log out. Is this a known issue?
Hello, I have a task with two steps:

1. Create an app that will increase the local account password complexity from 8 characters to 18, and push it from the deployer to the SHC.
2. Using REST API calls, update the local admin account password with a long, randomly generated 18-character string.

I found the file I need to update; it is under /opt/splunk/etc/system/local/authentication.conf. I can create an app folder on the deployer such as /opt/splunk/etc/shcluster/apps/EXAMPLE_PASSWORD_COMPLEXITY_APP. How do I arrange for the file inside the app to override the settings in /opt/splunk/etc/system/local/authentication.conf? And on point 2, if anyone has the API call to update the search head password for a local account, I would appreciate it.
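A sketch of both pieces (the setting name below is a real authentication.conf key; the app name comes from the post, and the credentials are placeholders). The password policy lives in the [splunk_auth] stanza:

```
# EXAMPLE_PASSWORD_COMPLEXITY_APP/local/authentication.conf
[splunk_auth]
minPasswordLength = 18
```

A local user's password can be changed via the REST endpoint /services/authentication/users/<username>, for example:

curl -k -u admin:CURRENT_PASSWORD https://<search_head>:8089/services/authentication/users/admin -d password=NEW_RANDOM_18_CHAR_STRING

One caveat: settings in etc/system/local take precedence over app-level settings, so an existing [splunk_auth] stanza in system/local/authentication.conf would need to be removed for the app's version to take effect.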
Hi, I'm trying to extract the matching patterns 35255955, 35226999, 35162846, etc. (through to the end of the string) into a patch_number field from the string below: the token after a space and before the semicolon. I tried the rex below in regex101.com, and it worked for me there: ([^\s<patch_number>]+;). But when I apply the same in Splunk, it's not working and gives me the error below.

query = index = ** sourcetype=** | rex field=_raw "([^\s<patch_number>]+;)"

Error in 'rex' command: The regex '([^\s<patch_number>]+;)' does not extract anything. It should specify at least one named group. Format: (?<name>...).

The result I'm looking for is patch_number = 35255955, 35226999, 35162846, etc.

Event string:

Domain=dfs1_sit2_osb
35255955;SOA Bundle Patch 12.2.1.4.230404
35226999;WLS PATCH SET UPDATE 12.2.1.4.230328
35162846;FMW Thirdparty Bundle Patch 12.2.1.4.230309
35159582;OWSM BUNDLE PATCH 12.2.1.4.230308
35148842;ADF BUNDLE PATCH 12.2.1.4.230306
35035861;RDA release 23.2-20230418 for OFM 12.2.1.4 SPB
33950717;OPSS Bundle Patch 12.2.1.4.220311
1221417;Coherence Cumulative Patch 12.2.1.4.17
34765492;
34542329;One-off
33639718;33639718 - ADR FOR WEBLOGIC SERVER 12.2.1.4.0 JUL CPU 2022
33903365;One-off
32720458;JDBC 19.3.0.0 FOR CPUJAN2022 (WLS 12.2.1.4, WLS 14.1.1)
33093748;One-off
32455874;One-off
32121987;OSB Bundle Patch 12.2.1.4.201105
31101362;
30997624;One-off
30741105;One-off
30700379;One-off
30455072;One-off
28970552;One-off
26573463;One-off
22526026;One-off
18387355;One-off
OPatch succeeded.

Kindly help me.

Regards, Satheesh
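In SPL, rex requires a named capture group of the form (?<name>...); the [^\s<patch_number>] in the attempt above is a character class, not a group name. A sketch that captures every number preceding a semicolon (max_match=0 makes patch_number a multivalue field):

```
index=** sourcetype=**
| rex field=_raw max_match=0 "(?<patch_number>\d+);"
| eval patch_number=mvjoin(patch_number, ",")
```

The final mvjoin is optional; it is only there to render the values as a single comma-separated string like 35255955,35226999,35162846.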
Splunk logs become visible only after 5 hours 30 minutes in the Splunk UI. For example, if I have to see the logs from 13:00 to 14:00, in the UI I have to check 18:00 to 19:00. Here the Splunk forwarder Docker container works as a sidecar container alongside the application container, with the same source volume mounted to both containers. Can someone help with what could be wrong here?
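A 5:30 offset is exactly the UTC offset of IST, which usually points to a timezone mismatch between the event timestamps and how Splunk parses them. A sketch of the usual fix, set in props.conf on the forwarder or indexer (the sourcetype name below is a placeholder):

```
# props.conf -- sourcetype name is a placeholder
[my_app_sourcetype]
TZ = Asia/Kolkata
```

TZ tells Splunk which timezone to assume for timestamps that carry no explicit offset; it applies only to events indexed after the change.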
Hello, we have a Splunk Enterprise setup on premises, and we have some hosts that are still running on the old i386 architecture. I recently had to refresh one, but I do not have the Splunk forwarder setup files for that environment. How can I get the most recent SplunkForwarder setup files for the Linux i386 architecture, please? Reaching out to support, as directed on the 'previous releases' download page, has not gotten me anywhere two weeks down the line now.
Hello, I am new to Splunk. Please help me write a query to get a count of responses by ServiceName (displayed in rows) and by response code. The response codes and service names are dynamic. E.g.:

Service      201    200    400   401   500   503
ServiceA    2900   1023      0    12     3     6
ServiceB    1649    677      1     1     3     6
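The chart command pivots one field's values into rows and another's into columns. A sketch (the field names ServiceName and response_code are assumptions about how the data is extracted):

```
index=** sourcetype=**
| chart count over ServiceName by response_code
```

Both row and column values are discovered dynamically, so new services and new status codes appear without any change to the query.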
Dear Splunkers: I want to adapt my plugin to the Splunk cluster. I have already set up the search head and peer cluster (one manager node, two search head nodes, two peer nodes), but I don't know how Splunk clusters work. How can I test to prove that my plugin is suitable for Splunk clusters? Currently, an index created on the peer nodes can be searched for relevant data from the search head nodes. Can you help me? By the way: the data I input on a peer node takes a long time before it can be found on the search head node. Why is this?
I have data like this (screenshot omitted). I need to represent it in a bar chart, where the x-axis is the month and the y-axis is Total_value, with Total_value calculated as (Category_no/Total_no)*100 for each month. The problem is that I have more than one value for each month, based on Category_name. Is there a way to combine these values of Total_value and show one value per month? For example, for the JAN month I need something like ((41+90)/(41+594))*100 = Total_value, and this total value has to be shown under JAN. Is there any way to do this?
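Summing the numerator and denominator per month before dividing matches the JAN example above. A sketch (the field names Month, Category_no, and Total_no are taken from the post's description and may need adjusting to the actual extraction):

```
... base search ...
| stats sum(Category_no) as cat_sum, sum(Total_no) as total_sum by Month
| eval Total_value=round((cat_sum/total_sum)*100, 2)
| fields Month, Total_value
```

Rendering the result as a bar chart then gives one bar per month.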
Hello Community, are there any add-ons available for Dell Unity, HPE Synergy, and Dell MX? If not, please suggest how to integrate them with Splunk. Regards, Eshwar
Hi all, I have an index=myindex with two sourcetypes.

sourcetype1 includes IP subnet information, such as:

Description    SubnetID          NetStart       NetEnd         NetBits  NetMask          Site  other_fields
10.168.64.0    10.168.64.0/24    10.168.64.0    10.168.64.255  24       255.255.255.0
100.108.95.68  100.108.95.68/30  100.108.95.68  100.108.95.71  30       255.255.255.252
100.108.24.24  100.108.24.24/30  100.108.24.24  100.108.24.27  30       255.255.255.252

sourcetype2 provides information about devices, including their IP addresses:

Device_Name   Mgmt_IP        Site  other_fields
my_device_1   100.108.65.75
my_device_4   100.108.95.70
my_device_10  10.168.64.68

I would like to find the unused IP addresses in every IP range at a specific site. Any information or guidance will be very much appreciated! Thank you in advance!
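One way to relate the two sourcetypes is to run one subsearch per subnet with map and test device IPs against the subnet with cidrmatch. This is only a sketch (the quoting and maxsearches limit are the fragile parts); it lists the *used* IPs per subnet, and the unused set is then the subnet range minus used_ips, which still has to be expanded separately and gets expensive for large subnets:

```
index=myindex sourcetype=sourcetype1
| fields SubnetID, Site
| map maxsearches=100 search="search index=myindex sourcetype=sourcetype2
    | where cidrmatch(\"$SubnetID$\", Mgmt_IP)
    | stats values(Mgmt_IP) as used_ips
    | eval SubnetID=\"$SubnetID$\""
```

For /30 subnets the remaining candidates can be enumerated by hand from NetStart/NetEnd; for larger subnets a lookup-based approach is usually more practical than expanding the range in SPL.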
I have an index named "Linux" and a CSV file called "sample.csv" with multiple columns, including "IP" and "Host". My objective is to retrieve the host values from the index data that match the host values in the CSV file. In the index data, the host values include not only host names but also IP addresses and hosts with DNS information; the host values in the CSV file consist solely of host names. I need to use the "mvappend" function for IP and host, which is fine. However, my concern is obtaining results that match the host values in the CSV file the way a hand-written search would. For instance, searching "index=linux host1 OR host2" returns values that match the host names in the raw data, such as "host1.dns.com" and so on. Yet when matching with the CSV file, it searches for the entire host name as it appears in the file.
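One pattern that reproduces the `host1 OR host2` behaviour is to turn each CSV row into a wildcarded search term via a subsearch. A sketch (it assumes sample.csv is available as a lookup file named sample.csv and that prefix matching on the short name is acceptable):

```
index=linux
    [ | inputlookup sample.csv
      | eval host=Host . "*"
      | fields host
      | format ]
```

format expands the rows into ( ( host="host1*" ) OR ( host="host2*" ) ... ), so a raw host like host1.dns.com matches the host1* term.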
I want to search Okta logs to find users that logged in from rare countries. Typically, users who log in from the USA, UK, or Australia are considered BAU, but those from Kuwait, Lesotho, etc. are rare. So far, I have done this:

index=* sourcetype="OktaIM2:log" eventType="user.session.start" outcome.result="success" client.geographicalContext.country!=null daysago=30
| stats values(user), values(client.ipAddress), values(actor.displayName) count by client.geographicalContext.country
| sort count
| where count < 20

It returned results like this, which aren't that accurate. For the first row, it gives user1, user2, and user3; my current search query gives a total of 20 logins from users 1, 2, and 3 combined, so user1 could be 1 login, user2 15 logins, and user3 5 logins.

Uzbekistan   user1 user2 user3   3
Slovakia     user1               1

What I want is at least more than 5 logins and fewer than 20 for a particular user, to show that there is some ongoing activity. So user2, for example, who had 15 logins from a rare country, will be displayed, but user1, who only had one login from the rare country, will not be. How do I get to this? Thanks.
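Counting by user and country together (instead of by country alone) makes the threshold apply per user. A sketch based on the fields in the posted query:

```
index=* sourcetype="OktaIM2:log" eventType="user.session.start" outcome.result="success" client.geographicalContext.country!=null daysago=30
| stats count, values(client.ipAddress) as src_ips by user, client.geographicalContext.country
| where count > 5 AND count < 20
| sort - count
```

daysago=30 is kept from the original query, though setting the time range via the picker (or earliest=-30d) is the more common way to scope the search window.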
How do I delete events whose timestamp decreases in between? I have extracted the _time column using regex so that Splunk's default sorting doesn't happen.

_time                     Warning
2021-08-09 12:26:55.7852  INFO
2021-08-09 12:26:56.2278  INFO
2021-08-09 12:26:56.2278  INFO
2021-08-09 12:26:56.3939  ERROR
2021-08-09 12:26:39.2861  INFO
2021-08-09 12:26:40.3430  ERROR
2021-08-09 12:26:41.3482  INFO
2021-08-09 12:26:41.4832  WARN
2021-08-09 12:26:41.7433  WARN
2021-08-09 12:26:41.7433  INFO
2021-08-09 12:26:41.7433  INFO
2021-08-09 12:26:54.8140  ERROR
2021-08-09 12:26:55.4640  INFO
2021-08-09 12:26:55.8192  INFO
2021-08-09 12:26:56.8794  ERROR
2021-08-09 12:26:57.8846  INFO
2021-08-09 12:26:58.9398  ERROR
2021-08-09 12:26:59.9450  WARN
2021-08-09 12:26:59.9700  ERROR
2021-08-09 12:26:59.9700  INFO
2021-08-09 12:27:00.8201  INFO
2021-08-09 12:27:00.8401  INFO
2021-08-09 12:27:01.0352  ERROR
2021-08-09 12:27:00.8901  INFO
2021-08-09 12:27:00.8701  INFO
2021-08-09 12:27:01.0452  ERROR

It should ignore the events I had marked in bold (the formatting was lost here): the ones where the seconds value starts decreasing relative to the preceding events.
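streamstats can carry a running maximum of the parsed time forward, so every event that falls behind it can be filtered out. A sketch (extracted_time is a placeholder for the field produced by the regex mentioned above):

```
... base search, keeping the original event order ...
| eval t=strptime(extracted_time, "%Y-%m-%d %H:%M:%S.%N")
| streamstats current=f max(t) as max_so_far
| where isnull(max_so_far) OR t >= max_so_far
```

Because this compares against the running maximum rather than just the previous event, the entire out-of-order run is dropped until the timestamps catch back up; note it would also drop a later event like 12:26:55.4640, which is still below the earlier peak of 12:26:56.3939, so the comparison may need loosening depending on the intent.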
Hello, currently we have the AWS Add-on version 6.1.0, configured to get data from S3 buckets, and we are planning to update from version 6.1.0 to version 7.0.0. How should I proceed with this update? I am concerned about the impact on the currently used configurations (for version 6.1.0). Any recommendations would be highly appreciated. Thank you!