Hi, I have installed a Splunk Enterprise system with multiple users. Each of our users has access only to specified indexes. On our search head I have installed the Splunk DB Connect app. This app includes two user roles: db_connect_admin (with admin permissions) and db_connect_user (with user permissions). To allow my users (~400 users) to use the Splunk DB Connect app, I assigned each user the new role db_connect_user.

After a few weeks one of my users discovered that he had full access to all indexes. I was really surprised, because until then everything had been restricted. I reviewed all roles and realised that every user with the db_connect_user role assigned has full access to all indexes. This is an enterprise system with a lot of indexes containing sensitive information. The problem is caused by this field (Role -> Indexes -> All non-internal indexes), which cannot be deactivated in the GUI (or I do not know how to do it; maybe someone can help here). I got information from support that this capability cannot be deactivated, which is wrong.

I uninstalled the Splunk DB Connect app, and everything went back to normal. I just want to warn all users that installing this add-on creates a high risk of data leakage. I have opened a ticket with support, but as far as I can see our discussion is going nowhere... Maybe someone will be able to tell me how to deactivate the "Indexes > All non-internal indexes" field in the role? I'm using the latest release of the app and Splunk 8.0. I appreciate any hints. Cheers, Konrad
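For anyone hitting the same problem: role index restrictions normally live in authorize.conf under srchIndexesAllowed, so one thing worth checking is whether the app's role stanza can be overridden locally. A sketch, assuming the stanza shipped by the app is named role_db_connect_user and that your_allowed_index stands in for your real index list (verify the effective settings with `splunk btool authorize list --debug`):

```
# local override, e.g. etc/system/local/authorize.conf
[role_db_connect_user]
srchIndexesAllowed = your_allowed_index
```

This is an assumption about where the app defines the role, not a confirmed fix; btool will show which file actually wins.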
As the title says: does anyone know how to plot such an attack graph in Splunk? Can a Splunk dashboard draw a GEO attack graph? I know there is a query like this:

FW blocked log | iplocation src_ip | geostats count as TOTAL

but this cannot display the relationship between source and destination. I need an arrow/vector to display the direction, like the picture I have uploaded. Thanks and regards, Max
Hi Splunkers, my logs look like the ones below, with the same set of logs for different WAS EARs:

earFile=abc.ear
...................................
Error1: Exception with DMGR..... Dbjbafjbjasbfbuasbhcbjsa

earFile=qrs.ear
...................................
Error2: SOAP exception.. skbdjasbjdgajsgdgush

My query should search for the 'Error1' and 'Error2' keywords, and the result should show the whole error message. For example, if I search for 'Error1' and 'Error2', the output should be a table like this:

Host  EAR_Name  Error
xyz   abc.ear   Error1: Exception with DMGR..... Dbjbafjbjasbfbuasbhcbjsa
xyz   qrs.ear   Error2: SOAP exception.. skbdjasbjdgajsgdgush
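The pairing being asked for (each earFile value joined to the error line that follows it) can be sketched outside SPL. A minimal Python illustration using the sample events above; the regex assumes each ErrorN line follows its earFile= line, as in the sample:

```python
import re

raw = """earFile=abc.ear
...................................
Error1: Exception with DMGR..... Dbjbafjbjasbfbuasbhcbjsa
earFile=qrs.ear
...................................
Error2: SOAP exception.. skbdjasbjdgajsgdgush"""

# Pair each earFile value with the ErrorN line that follows it.
pairs = re.findall(r"earFile=(\S+).*?\n(Error\d+:[^\n]*)", raw, re.S)
for ear, err in pairs:
    print(ear, "->", err)
```

In SPL the same idea would be two capture groups in a rex over _raw, then a table of host, the EAR group, and the error group.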
I am fairly new to Splunk Enterprise and I have read the documentation about writing cron expressions. Does Splunk support special characters in cron expressions? Examples of special characters: * , - ? L W. Example cron syntax: 0 */2 * ? * *
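For context: Splunk's scheduler expects standard five-field cron (minute, hour, day-of-month, month, day-of-week); the ?, L, and W tokens come from Quartz-style six/seven-field cron, so the example above would need to be rewritten as five fields. A rough five-field validator in Python, as an illustration of the standard syntax rather than Splunk's actual parser:

```python
import re

# Standard crontab fields and their value ranges.
FIELDS = [("minute", 0, 59), ("hour", 0, 23), ("day_of_month", 1, 31),
          ("month", 1, 12), ("day_of_week", 0, 7)]

# One token: "*" or a number, optionally with a range and/or a step.
TOKEN = re.compile(r"^(\*|\d+)(-\d+)?(/\d+)?$")

def is_valid_cron(expr):
    """Loosely validate a five-field cron expression (no ?, L, or W)."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        return False
    for part, (_, lo, hi) in zip(parts, FIELDS):
        for token in part.split(","):
            m = TOKEN.match(token)
            if not m:
                return False
            if m.group(1) != "*" and not lo <= int(m.group(1)) <= hi:
                return False
    return True

print(is_valid_cron("*/2 * * * *"))    # standard five-field form
print(is_valid_cron("0 */2 * ? * *"))  # six fields with '?': not standard cron
```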
Hi Splunkers! I have a question about the searchmatch function: when I try excluding a string using the NOT boolean inside searchmatch, it does not work, although the AND/OR booleans work fine. Can't we use NOT with searchmatch in a query? Below is my sample query:

index=xxx source=yyy "Issue-1111" OR "Issue-1122" OR "Failure-1212" OR "Failure-1111" OR "Failure-"
| eval Result=case(searchmatch("Issue-1111"), "Desc 1", searchmatch("Issue-1122"), "Desc 2", searchmatch("Failure-1212"), "Desc 3", searchmatch("Failure-1111"), "Desc 4", (searchmatch("Failure-") NOT searchmatch("Failure-1111") NOT searchmatch("Failure-1212"), "All Failures Excluding Desc3&4"))
| stats count by Result

Thanks in advance!
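One observation worth noting: case() evaluates its conditions in order, so a catch-all listed last only ever sees events the earlier, more specific conditions did not claim, and explicit NOTs may not be needed at all. The intended ordered matching can be sketched in Python (labels and patterns are the ones from the query above):

```python
def classify(event):
    """Ordered matching: specific patterns first, catch-all last."""
    rules = [
        ("Issue-1111", "Desc 1"),
        ("Issue-1122", "Desc 2"),
        ("Failure-1212", "Desc 3"),
        ("Failure-1111", "Desc 4"),
    ]
    for pattern, label in rules:
        if pattern in event:
            return label
    # Catch-all: any other "Failure-" event; the specific IDs were
    # already consumed above, so no explicit exclusion is required.
    if "Failure-" in event:
        return "All Failures Excluding Desc3&4"
    return None

print(classify("Failure-9999 occurred"))  # falls through to the catch-all
```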
Hello, I'm trying to calculate the min and max time of an event (the time when the event started and when it ended). When I add my calculation to the query, I lose the results of the rest of the query and get only the result of the time calculation. What am I missing?

index=prod eventtype="csm-messages-dhcpd-lpf-eth0-listening" OR eventtype="csm-messages-dhcpd-lpf-eth0-sending" OR eventtype="csm-messages-dhcpd-send-socket-fallback-net" OR eventtype="csm-messages-dhcpd-write-zero-leases" OR eventtype="csm-messages-dhcpd-eth1-nosubnet-declared"
| transaction maxpause=2s maxspan=1s maxevents=5
| stats first(_time) as min_time, last(_time) as max_time
| table _time, eventcount, eventtype, tail_id, kafka_uuid, min_time, max_time
| foreach eventtype [eval flag_eventtype=if(eventcount!=5,"no", "yes")]
Hi, can someone please help me extract a field from this:

"x-hello-abc":["101.2.10.1, 102.3.4.3, 12.3.45.5"]

Please help with a regex expression to extract this field.
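A sketch of one way to pull the values out with a regex, assuming the raw text always has the "x-hello-abc":["..."] shape shown above (the field name and quoting are taken from the sample; adjust if the real events differ):

```python
import re

raw = '"x-hello-abc":["101.2.10.1, 102.3.4.3, 12.3.45.5"]'

# Capture everything between the quoted brackets, then split on commas.
m = re.search(r'"x-hello-abc":\["([^"]+)"\]', raw)
ips = [ip.strip() for ip in m.group(1).split(",")] if m else []
print(ips)  # ['101.2.10.1', '102.3.4.3', '12.3.45.5']
```

In SPL the same capture group would go into a rex command, e.g. with a named group for the bracketed value, followed by makemv to split on the commas.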
I always see these "OS" and "Windows" tags in eventtypes.conf and tags.conf. They appear in the production environment and in Splunkbase applications, even though we're only using the default Splunk CIM. "OS" can be part of the Performance data model, but what about "Windows"? Which data model does it belong to?
Dear all, I have an issue with a new dedicated search head for ES. My Splunk architecture is quite simple: 4 clustered indexers, 3 search heads, and 1 dedicated search head for ES. Everything is working fine except the ES notable events: the notable index remains empty.

On the search head the search peer list is OK and the indexers are defined. All the events from the search head are sent to the indexers, and the notable index has been created there. I have correlation searches active with two actions: notable and JSON alerting. JSON alerting is OK; notable is not.

If I manually create a notable event in ES, I can see it in the main index, with a strange sourcetype ("stash_common_action_model-too_small"). I found it with the following search: index=main notable

1589789135, search_name="Manual Notable Event - Rule", _time="1589789135", app="SplunkEnterpriseSecuritySuite", creator="XXXX", info_max_time="+Infinity", info_min_time="0.000", info_search_time="1589789135.348247000", owner="XXXX", rule_description="hello world", rule_title="test5", security_domain="access", status="1", urgency="informational"

Is there an issue with this sourcetype / index? Or does anyone have an idea for troubleshooting? Thank you in advance. PS: a Splunk ticket was opened 10 days ago... but I am still stuck.
Hi all, in our environment we have several Windows universal forwarders managed by a deployment server. We didn't apply any change to the forwarders, yet some of them are unable to send some of their data to the indexers. The data we are not receiving is the data that comes from files (internal logs, monitored files, etc.); TCP/UDP inputs are working fine. We have checked the permissions, and the splunk user has full control over the Splunk folders and the log file folders. We also reset the fishbucket to rule out any issue with it. No errors appear in splunkd.log on the UF. Does anybody know how to troubleshoot this? Thanks in advance.
Hi, our app is using the Splunk MINT iOS SDK v5.2.4. The user ID is set after a successful login using Mint.sharedInstance().userIdentifier, but we are not able to see the user ID in all error instances; it only appears for some of them. Please guide.
Hi Support, we are trying to deploy the Varonis App and the Varonis Technology Add-On (AKA Varonis TA) on Splunk. Following the Varonis DatAlert App and Technology Add-On for Splunk user guide, we installed the Varonis App on one of our Splunk search heads and the Varonis TA on a Splunk heavy forwarder. However, we cannot see any information in Splunk.

We contacted Varonis Support to confirm that Varonis DatAlert is correctly sending syslog to Splunk, and they replied that Varonis is only certified for a Splunk single-server environment. The environment we are using is clustered: we have search heads, indexers, a forwarder, and a heavy forwarder. Does Splunk Support have any experience deploying the Varonis App and Varonis TA in a clustered Splunk environment? Or any other suggestions for us, please? Thank you.
The link points to:

http://hostname/en-US/app/splunk_app_jenkins/test?master=***&job=***&build=***

but the real link is:

http://hostname/en-US/app/splunk_app_jenkins/testAnalysis?master=***&job=***&build=***
Hi all, I am fetching data from a database and have the fields below (no raw time is provided):

1. A date field (e.g. 2020-04-28 00:00:00.0, in "%Y-%m-%d %H:%M:%S.%Q" format)
2. A status field (e.g. A, B, C, D, E)

How can I get the nested hour columns below inside the daily field? (I have attached an image of what I want to achieve, but in Splunk.)

1. 10am - 12pm
2. 12pm - 3pm
3. 3pm - 6pm
4. 6pm - 10am

Currently I am only able to achieve a daily view of status, but I now want the status for the hour ranges above within a daily view, in a column bar chart. My current query for the daily view:

| dbxquery query=" " connection=" "
| eval create_date = strptime(CREATED_DT, "%Y-%m-%d %H:%M:%S.%Q")
| where create_date >= relative_time(strptime(strftime(now(),"%d-%b-%y"),"%d-%b-%y"), "-3d") AND create_date <= strptime(strftime(now(),"%d-%b-%y %H:%M:%S.%Q"), "%d-%b-%y %H:%M:%S.%Q")
| eval create_date_new = strftime(create_date,"%d-%b-%y")
| chart count over create_date_new by STATUS

I appreciate any help with this issue. Thanks! Zovin
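The custom buckets above amount to a mapping from the event's hour to a label; a minimal Python sketch of that logic (the ranges are the four listed in the question; note the last bucket wraps past midnight):

```python
from datetime import datetime

def hour_bucket(dt):
    """Map a timestamp to one of the four custom hour ranges."""
    h = dt.hour
    if 10 <= h < 12:
        return "10am-12pm"
    if 12 <= h < 15:
        return "12pm-3pm"
    if 15 <= h < 18:
        return "3pm-6pm"
    return "6pm-10am"  # wraps around midnight: 18:00 through 09:59

print(hour_bucket(datetime(2020, 4, 28, 13, 30)))  # 12pm-3pm
print(hour_bucket(datetime(2020, 4, 28, 2, 0)))    # 6pm-10am
```

In SPL the same mapping could be a case() over strftime(create_date, "%H"), producing a bucket field to chart by alongside create_date_new.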
Hi, I have a question. My controller is on-prem and my license is AppDynamics Lite, but when I start the Java agent there is an error in my log: 'Agent license request denied. Agent type: Java; Host: xuyxg; License Rule: Default; Reason: No license found for account [Redacted]'. I don't seem to have any right to monitor. Is there a problem with my configuration? And my controller-info.xml is:

^ Edited by @Ryan.Paredez to remove screenshots that included access keys. Please do not share access key information in community posts, for security and privacy reasons.
Hi, I've inherited a poorly documented Splunk deployment that seems to have been misconfigured. The universal forwarder service isn't starting on workstations due to a logon issue: either the password is wrong or the configured account is wrong. Is there a way to determine which account is the correct one, i.e. which account the deployment server expects the UF to use? Many thanks in advance.
Hi, I have a dashboard with a panel showing a table of triggered alerts:

| table _time, ss_name, severity
| sort - _time
| rename ss_name AS "Alert Name", severity AS "Severity"

When a user clicks on the alert name, the dashboard populates a drilldown panel:

<drilldown>
  <condition field="Alert Name">
    <set token="show_panel">true</set>
    <set token="selected_value">"$click.value2$"</set>
    <set token="selected_value_latest">$click.value$</set>
    <eval token="selected_value_earliest">relative_time($selected_value_latest$, "-15m")</eval>
    <eval token="converted_time">strftime($selected_value_latest$, "%Y-%d-%m %H:%M")</eval>
  </condition>
  <condition>
  </condition>
</drilldown>

and I'm using the converted_time token to show the user the time of the alarm they clicked:

<panel>
  <table>
    <title>[Drilldown] Recent statistics for $selected_value$ at $converted_time$</title>

The issue is that converted_time shows an offset time. From what I gather, it shows the time in the local computer's timezone (e.g. GMT-6, where the user is logged in from) even though the user's Splunk preference is set to GMT-5. I do not want to show the time in the user's local timezone but rather in GMT-5. If I run strftime in a search, e.g.:

| eval converted_time = strftime(_time, "%Y-%d-%m %H:%M")
| table _time converted_time

the converted_time column correctly matches the _time column. But when I use strftime in the dashboard:

<eval token="converted_time">strftime($selected_value_latest$, "%Y-%d-%m %H:%M")</eval>

I get a different result. How can I fix this?
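The underlying behavior can be illustrated outside Splunk: formatting the same epoch with two different zone offsets yields different wall-clock strings, which is why the dashboard token (rendered in one zone) and the search (using the user's Splunk timezone preference) disagree. A minimal Python sketch, using the GMT-5/GMT-6 offsets from the question and an arbitrary example epoch:

```python
from datetime import datetime, timezone, timedelta

epoch = 1590527400  # an arbitrary example epoch

gmt_minus_5 = timezone(timedelta(hours=-5))
gmt_minus_6 = timezone(timedelta(hours=-6))

t5 = datetime.fromtimestamp(epoch, gmt_minus_5).strftime("%Y-%d-%m %H:%M")
t6 = datetime.fromtimestamp(epoch, gmt_minus_6).strftime("%Y-%d-%m %H:%M")
print(t5, t6)  # same instant, one hour apart on the wall clock
```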
I have an alert that searches every 15 minutes for a count of events greater than 150 (| where Count>150) for the same routing prefix and merchant name. We list six fields in the results: routing prefix, merchant ID, bank ID, merchant name, merchant category code, and Count. I want to stop duplicate emails/alerts when the alert is for the same merchant category, bank ID, and merchant name that has already been alerted on in the past 8 hours. Is there an optimal way to build the search to do this, or a way to set up the trigger conditions that would allow it?
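For reference, Splunk's per-result alert throttling is configured with the alert.suppress settings in savedsearches.conf (exposed in the UI as throttling with "Suppress results containing field value"). A sketch, assuming the extracted field names are merchant_category_code, bank_id, and merchant_name; substitute your actual field names:

```
alert.suppress = 1
alert.suppress.fields = merchant_category_code,bank_id,merchant_name
alert.suppress.period = 8h
```

With this, results that share all three field values are suppressed for 8 hours after an alert fires for that combination.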
We have a search that runs fine, but when we schedule it as a report we don't get the email, and in _internal we see:

05-26-2020 17:10:25.215 -0400 ERROR ScriptRunner - stderr from '/opt/apps/splunk/bin/python /opt/apps/splunk/etc/apps/search/bin/sendemail.py "results_link=https://:8000/app/search/@go?sid=scheduler__myid__search__RMD593055a08ba8cd116_at_1590527400_77786" "ssname=My test" "graceful=True" "trigger_time=1590527424" results_file="/opt/apps/splunk/var/run/splunk/dispatch/scheduler__myid__search__RMD593055a08ba8cd116_at_1590527400_77786/results.csv.gz"': _csv.Error: line contains NULL byte

What might be the problem?
Hi! In the Event column I get the following:

26/05/2020 11:24:51 > Invoice Val Increase on History Report process completed

I have tried multiple ways to extract the report name ("Invoice Val Increase on History Report") from this. How do I split that out?
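One way to sketch the extraction, assuming the event always ends with "<name> process completed" as in the sample (the capture treats everything between "> " and " process completed" as the report name):

```python
import re

event = "26/05/2020 11:24:51 > Invoice Val Increase on History Report process completed"

# Named group 'report': everything between '> ' and ' process completed'.
m = re.search(r">\s*(?P<report>.+?)\s+process completed", event)
print(m.group("report"))  # Invoice Val Increase on History Report
```

In SPL the equivalent idea would be a rex command with the same named capture group applied to _raw.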