All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is the "Add-on for Atlassian JIRA Service Desk alert action" plugin compatible with Jira Service Desk? The application name appears in the plugin's title, but it is not mentioned in the plugin's instructions. The overview says:

"The Add-on is compatible with:
- JIRA Server
- JIRA Cloud
- JIRA Data center"

But can I create issues in Jira Service Desk with this plugin when an alert is triggered?
Hello all, the following problem is giving me some trouble; I hope you can help. In a Search Head Cluster, every peer has a distsearch.conf under splunk/etc/system/local. There is a stanza I want to delete, but after a restart it suddenly reappears. What I tried:

- deleted the stanza on every peer
- after deleting the stanza on every instance, restarted the cluster (splunk rolling-restart)
- checked the deployer for apps

After this, the stanza appeared again. Example. I want this:

[distributedSearch]
servers = https://server1:8089, https://server2:8089, https://server3:8089

to look like this:

[distributedSearch]
servers = https://server1:8089, https://server3:8089

There is no app on my deployer that would affect distsearch.conf in my SHC. Normally an app would go under /splunk/etc/apps. I just inherited the environment and am not 100% sure about every connection. Thank you for your help/comments.
Has anyone had to match two field values using a wildcard in one of them? In my scenario, I have a host field that looks like host=server1, and a dest field like dest=server1.www.me, dest=server1.xxx.com, and dest=comp1. I'm trying to find all instances where the host field, with a wildcard appended, matches the dest field. This is the query I have so far, without the filter:

index="winevents" host=*
| stats dc(dest) as total values(dest) count by host
| search total > 1

The results look like this:

host            total       values(dest)
server1         3           server1.www.me
                            server1.xxx.com
                            comp1

How can I filter to only where the host field matches the dest field, so the results exclude the third dest value, comp1?

host            total       values(dest)
server1         2           server1.www.me
                            server1.xxx.com

I tried this but get no results:

index="winevents" host=*
| eval host=host + "*"
| search host=dest
| stats dc(dest) as total values(dest) count by host
| search total > 1
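For reference, the matching rule being asked for can be sketched outside Splunk like this (a minimal Python sketch of the logic only, not SPL; it assumes a dest matches when it equals the host or starts with the host followed by a dot):

```python
# Illustrative sketch: a dest value matches a host when it equals the host
# or starts with the host followed by a dot (i.e. host matches "host.*").
def matching_dests(host, dests):
    return [d for d in dests if d == host or d.startswith(host + ".")]

print(matching_dests("server1", ["server1.www.me", "server1.xxx.com", "comp1"]))
# keeps the two server1.* values and drops comp1
```

In SPL the same idea might be expressed by filtering the values(dest) multivalue with mvfilter against the host, though the exact query would need testing against the real data.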
Hi, below is the information from one of my logs:

"Information","ajp-nio-127.0.0.1-8016-exec-642","11/24/20","13:30:14","CLIENT_URL","samlServices.processRequest: stuUserReturn: {""update_user_details"":1,""processByCf"":true,""USER_LOGIN"":""XXXXX@YYYY.org"",""userID"":""XXXXXXX"",""user_id"":XXXXX,""connection_name"":""XXXX"",""login"":""XXXXX@YYYY.org"",""userAttributes"":{""group_name"":""HR"",""telephone"":"""",""country"":1,""preferredLanguage"":"""",""login"":"""",""organisation"":""XXXXX English"",""last_name"":""XXXXXX"",""email"":""XXXX@YYYY.org"",""first_name"":""XXXX"",""company"":""XXXXX English""},""saml_id"":1,""loginStatus"":""success""}"

The last bit, loginStatus"":""success", will be loginStatus"":""failed" in case of failure.

I want to create a chart/dashboard where I can compare the number of successful requests to failures over a period of time, e.g. 30 days. Can someone please help me sort this out? Thanks. M
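If automatic extraction isn't already picking it up, the loginStatus value can be pulled with a regex; here is the pattern sketched in Python against a fragment of the log above (the doubled quotes are because the JSON is itself inside a quoted CSV-style field):

```python
import re

# Fragment copied from the end of the sample log line above.
sample = '""saml_id"":1,""loginStatus"":""success""}"'

# Extract the login status; the same pattern would capture "failed" too.
m = re.search(r'loginStatus"":""(\w+)', sample)
print(m.group(1))  # success
```

In SPL the equivalent might be a rex extracting a loginStatus field followed by something like timechart span=1d count by loginStatus; that part is a sketch and would need testing against how the event is actually quoted after ingestion.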
I have a panel in a dashboard running correctly, but the data doesn't appear in this dashboard every day.

What I want to know is: is it possible to show another panel instead of this one when the search query returns no results? It is not comfortable to look at an empty hole. I mean, if there are results, show this panel; if there are no results, show another one instead.
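One common Simple XML pattern for this sets a token from the search's result count and shows or hides panels with depends. A sketch (the query, earliest/latest, and token names here are placeholders, not taken from the dashboard in question):

```xml
<dashboard>
  <row>
    <panel depends="$show_data$">
      <table>
        <search>
          <query>index=main | stats count by host</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
          <done>
            <condition match="'job.resultCount' &gt; 0">
              <set token="show_data">true</set>
              <unset token="show_empty"></unset>
            </condition>
            <condition>
              <set token="show_empty">true</set>
              <unset token="show_data"></unset>
            </condition>
          </done>
        </search>
      </table>
    </panel>
    <panel depends="$show_empty$">
      <html><p>No results for this period.</p></html>
    </panel>
  </row>
</dashboard>
```

The idea is that exactly one of the two tokens is set after the search completes, so exactly one of the two panels renders.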
Hello! I am struggling to mask the last 4 digits of my numbers.

| rex field=FIELD_XY mode=sed "s/[0-9#]{3}$/###/g"

With this code I am able to mask the last digits of all kinds of numbers in my table, so the numbers look like 123456####. What I cannot do is apply this masking only to numbers that are 8 or more digits long. I tried several options and played with the regex, but it either didn't mask anything or masked everything. Thank you!
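The length condition can be expressed with a lookbehind: mask the final four digits only when at least four more digits precede them, which is only true for numbers of 8+ digits. A Python sketch of the pattern (Splunk's sed mode uses PCRE-style regexes, so the same lookbehind is worth trying in rex, but that part is untested here):

```python
import re

def mask(value):
    # Replace the last four digits with #### only if at least four
    # digits come immediately before them (i.e. the number has 8+ digits).
    return re.sub(r"(?<=\d{4})\d{4}$", "####", value)

print(mask("123456789"))  # 12345####  (9 digits: masked)
print(mask("12345678"))   # 1234####   (8 digits: masked)
print(mask("1234567"))    # 1234567    (7 digits: left alone)
```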
Rather than using credentials, I want to use tokens in Splunk DB Connect to get data from a Spark database.

Q1: Can we change the auth mechanism to token-based when creating identities or connections in the Splunk DB Connect app?
Q2: Can we put authentication parameters into the JDBC URL itself? A possible JDBC URL for connecting to the database:

jdbc:spark://<hostname>:<port>/<database>;transportMode=<transport_mode>;ssl=1;httpPath=http;AuthMech=3;UID=token;PWD=<token>

A reference link for the Simba driver: https://www.simba.com/products/Spark/doc/JDBC_InstallGuide/content/jdbc/options/authmech.htm
After upgrading to Splunk Enterprise 8.1, I seem to be encountering the same issue as https://community.splunk.com/t5/All-Apps-and-Add-ons/Lookup-Editor-Won-t-save-edits-after-update-to-Splunk-Cloud-8-0/m-p/505552

Yet adding 'upload_lookup_files' did resolve my issue. Would anyone happen to know the capabilities needed to upload/edit/delete the tables?
Hi all, I am using data from 3 different indexes. They contain events that can be attributed to specific transactions through an ID. There are multiple transactions, and each transaction contains events from multiple indexes. A transaction can look like: 1) event from index 1, 2) event from index 2, 3) event from index 1, etc.

I would like to get only the events whose transaction starts with A and ends with B or C. I was thinking of using transaction, but it would be far too slow. I tried to work it out with stats, but I end up getting all events, not only those from transactions that start with A and end with B or C. The result should also be a list of events, i.e. no chart or visualization. Any ideas?

(index=x) OR (index=y) OR (index=z)
| stats list(*) as * by ID Time
| fields - a,b,c
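The filtering rule itself is simple to state: order each ID's events by time and keep the whole group only if the first event is of type A and the last is B or C. A Python sketch of that logic (the event types and tuple layout are placeholders, not the real data):

```python
# Placeholder events: (transaction_id, timestamp, event_type)
events = [
    ("t1", 1, "A"), ("t1", 2, "X"), ("t1", 3, "B"),   # starts A, ends B -> keep
    ("t2", 1, "X"), ("t2", 2, "C"),                   # starts X -> drop
]

def keep_transactions(events):
    groups = {}
    for tid, ts, etype in events:
        groups.setdefault(tid, []).append((ts, etype))
    kept = []
    for tid, evs in groups.items():
        evs.sort()  # order each transaction's events by timestamp
        if evs[0][1] == "A" and evs[-1][1] in ("B", "C"):
            kept.extend((tid, ts, et) for ts, et in evs)
    return kept

print(keep_transactions(events))
```

In SPL a similar effect might come from stats earliest(type) and latest(type) by ID, filtering with where, and then joining back to the raw events; that is a sketch, not a tested query.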
How can I route this kind of data to its proper index?

Data:
transaction_1
transaction_2
transaction_01
transaction_02
transaction_11
transaction_12

Condition:
transaction_1 - transation_non_zero (index name)
transaction_2 - transation_non_zero
transaction_01 - transation_w_zero
transaction_02 - transation_w_zero
transaction_11 - global_unmatched_index
transaction_12 - global_unmatched_index

global_unmatched_index is the index where all data that does not match the (transation_non_zero, transation_w_zero) indexes ends up. Also, the requirement is to use props and transforms.
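A sketch of a props.conf/transforms.conf pair for the routing above (the sourcetype name is a placeholder; the regexes assume each event ends with the transaction_NN token, and the input's default index is set to global_unmatched_index so non-matching events fall through to it):

```ini
# props.conf  (placeholder sourcetype)
[my_transaction_sourcetype]
TRANSFORMS-route_transactions = route_non_zero, route_w_zero

# transforms.conf
[route_non_zero]
# transaction_ followed by a single non-zero digit at end of event,
# so transaction_11 / transaction_12 do not match
REGEX = transaction_[1-9]$
DEST_KEY = _MetaData:Index
FORMAT = transation_non_zero

[route_w_zero]
# transaction_ with a leading zero (transaction_01, transaction_02)
REGEX = transaction_0\d$
DEST_KEY = _MetaData:Index
FORMAT = transation_w_zero
```

The anchoring with $ is what keeps transaction_11 out of the non-zero rule; both regexes would need adjusting if the token is not at the end of the raw event.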
It seems that even the latest Palo Alto app fails with the zh-CN locale in an Enterprise 8.1 environment, though it worked well in Enterprise 7.2. Would anyone know how to work around this bug? The search head web page hits HTTP 500 completely. Thanks a lot! :)
Hi, I have some Cisco syslog. In these logs I have login success and failure (field1), the authentication method (field2), and MAC addresses (field3).

I need to create a dashboard that contains only the MAC addresses that have only failed logins with authentication method equal to ethernet, so they can't have any login (failed or success) with authentication method wireless. Then, in another dashboard, I need to put the MAC addresses that have failed logins with authentication method equal to ethernet and at least one attempt (success or failed) with wireless. I don't know if the problem is clear. I have a query like this:

index=.....................
| stats values(field1) as status by field3
| where mvcount(status)=1 and status="failed"
| dedup field3
| stats count

This query should take only the MAC addresses that have only failed logins, if that helps. Thanks in advance!!
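The two dashboard conditions are set logic per MAC address, which can be sketched in Python (field roles follow the question; the sample records are invented for illustration):

```python
# Invented sample records: (mac, status, method)
records = [
    ("aa:aa", "failed",  "ethernet"),
    ("bb:bb", "failed",  "ethernet"),
    ("bb:bb", "success", "wireless"),
    ("cc:cc", "success", "ethernet"),
]

def classify(records):
    by_mac = {}
    for mac, status, method in records:
        by_mac.setdefault(mac, set()).add((status, method))
    # Dashboard 1: only failed-over-ethernet attempts, nothing else at all.
    only_failed_eth = [m for m, s in by_mac.items()
                       if s == {("failed", "ethernet")}]
    # Dashboard 2: failed over ethernet AND at least one wireless attempt.
    failed_eth_plus_wifi = [m for m, s in by_mac.items()
                            if ("failed", "ethernet") in s
                            and any(meth == "wireless" for _, meth in s)]
    return only_failed_eth, failed_eth_plus_wifi

print(classify(records))  # (['aa:aa'], ['bb:bb'])
```

In SPL the analogue would likely be stats values over combined status/method pairs by field3 followed by mvcount/mvfilter conditions, but that would need testing on the real events.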
Hi there, we noticed we are not getting some logs coming through at certain hours in the morning, after log rotation, so we ran the query below:

index=_internal host=* /opt/workfusion/supervisord/log/workfusion.out.log NOT Metrics earliest=-7d latest=now
| timechart span=5m count as NumInt

Here is the result:

11-24-2020 01:30:03.080 +0200 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/workfusion/supervisord/log/workfusion.out.log'.

11-19-2020 01:30:04.536 +0200 INFO WatchedFile - Logfile truncated while open, original pathname file='/opt/workfusion/supervisord/log/workfusion.out.log', will begin reading from start.

How can I fix this? It's affecting our dashboard: because there are no results or logs, the dashboard is empty.
I need to find the daily indexing volume, but without using any internal indexes.
Hey, I am working towards Splunk Fundamentals 1 and doing the eLearning assignments; currently on Module 5. I have imported the lab materials and such, and I am supposed to look for an area called "What to search" in Search & Reporting. If I search anything, then I can access the data, but I don't see the summary that "What to search" would provide. Any idea how to turn it back on or how to activate it?

How my interface looks right now:
Hi all, I have a duration format like 13 Days, 8 Hours, 34 Minutes and need to convert it to seconds so I can sort high to low on a dashboard. I would like to keep the 13 Days, 8 Hours, 34 Minutes format; I just need to add a field in seconds to sort by.
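The conversion is just arithmetic on the three parts once they are extracted. A Python sketch of the parsing (it assumes the field always uses the "N Days, N Hours, N Minutes" wording, with any part possibly absent):

```python
import re

def duration_to_seconds(text):
    # Pull each numeric value by its unit name; a missing part counts as 0.
    def part(unit):
        m = re.search(r"(\d+)\s*" + unit, text, re.IGNORECASE)
        return int(m.group(1)) if m else 0
    return part("Day") * 86400 + part("Hour") * 3600 + part("Minute") * 60

print(duration_to_seconds("13 Days, 8 Hours, 34 Minutes"))  # 1154040
```

In SPL the same idea would be a rex capturing the three numbers into fields plus an eval doing the multiplication, keeping the original string untouched for display; that query is left as a sketch.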
I want to count all events from specific indexes, say abc, pqr and xyz only, over a span of 1h using tstats, and present it in a timechart. I tried this, but it is not working:

| tstats count WHERE earliest=-1d@-3h latest=now index=ABC,PQR,XYZ by index, _time span=1h
| timechart sum(count) as count by index
I'm creating a dashboard that processes firewall log data with traffic types. What information should be visualized? What detection should be done?
I have data fed into Splunk via a forwarder. I want to count the events for the time range selected by the user in the time picker.

index=default sourcetype=trans_logs host="abcd.rangarbus.com" source=/logs/transfer_report_*.log
| timechart span=1h count
| timewrap 1d series=exact
| eval time=strftime(_time, "%H:%M")
| fields - _time
| fields + time, *
| sort by time

I have selected last 7 days in the date/time picker. Attached is the result I get in Splunk. It shows Nov22 at the end, but ideally it should be Nov23. What should I change here to have timewrap per day with the exact date in the column title?
I have a data source that is being ingested into Splunk using a default field extraction, which is working fine. The data looks like:

DateTime=2020-11-24-10.38.00.869407,type=New-Request,Username=9999999,Client-Mac=F8-4E-73-xx-xx-xx,Called-Station-Id=A0-D3-C1-zz-zz-zz,SSID=myWiFi,NAS-IP=192.168.141.130,Nas-Identifier=CISCO_AP:CN3AD338P5,NAS-Port-Type=Wireless-802.11,Campus=SMB,Location=SMB Buildings HI

The data is being parsed correctly and I get the field name/value pairs in Splunk, no problem (field_name=value). The issue I have is with the last field, Location. The default field extraction extracts the Location field, but if the value contains spaces I only get up to the first space in the indexed data. From the above example, my Location data returns "SMB" only, not "SMB Buildings HI". Is there any way to resolve this, either by preventing it from splitting the value at the space, or by replacing the space with another character such as '_'?
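One option is an explicit extraction for just that field: since Location is the last field in the event, its value can be anchored to run until the next comma or end of line, which keeps the spaces. The regex idea sketched in Python on an abbreviated copy of the sample event (in Splunk this pattern might live in a rex or an EXTRACT- stanza; that placement is an assumption):

```python
import re

# Abbreviated copy of the sample event above.
raw = ("DateTime=2020-11-24-10.38.00.869407,type=New-Request,"
       "Campus=SMB,Location=SMB Buildings HI")

# Default KV extraction often stops at whitespace; capturing everything
# up to the next comma (or end of event) preserves the spaces.
m = re.search(r"Location=([^,]+)", raw)
print(m.group(1))  # SMB Buildings HI
```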