All Topics

Hello all, we recently upgraded the Splunk "Alert Manager" app from version 2.1.4 to 3.0.7. Since the upgrade, we cannot assign Incidents to anyone; the "Owner" field is greyed out. Can someone help resolve this issue?
Hey Splunk support, could you please help me with a dev Enterprise license?
I have a particular feed with 24 appliances that send their data via REST calls over port 8089 to a heavy forwarder, which then forwards it to the indexing cluster for indexing. For every event from every appliance, _time is correct, with the exception of three appliances. For those three, regardless of when the events are generated, _time is always 3:55:40.000 AM for appliance one, 3:25:00.000 AM for appliance two, and 3:58:00.000 AM for appliance three. The other 21 appliances, which send data the exact same way, do not have this issue. My original thought was that it was a config issue with those three appliances, but the team that manages them confirmed they are all configured the same. I have not been able to find any clues on the Splunk side as to why this may be happening. Any help would be appreciated.
Hi team, I am running the latest Hurricane Labs Shodan version 2.0.8, but I am getting this error when running the saved search:

03-04-2021 13:50:53.754 ERROR KVStoreLookup - KV Store output failed with err: The _id field is a reserved field and may not be present in a document or query. message:
03-04-2021 13:50:53.755 ERROR SearchResultsFiles - An error occurred while saving to the KV Store. Look at search.log for more information.
03-04-2021 13:50:53.755 ERROR outputcsv - sid:1614865793.691 Could not write to collection 'shodan_output': An error occurred while saving to the KV Store. Look at search.log for more information.

It looks like an _id field is somehow being generated by the search. I could rename it, of course, but I do not know how that would impact the rest of the app without troubleshooting it. So I was wondering: how can I fix this quickly and, eventually, get the app updated to prevent this error in future iterations? Thanks and regards!
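A possible quick workaround, assuming the saved search ends by writing to the collection with outputlookup or outputcsv: drop the reserved field just before the write, so KV Store assigns its own _id:

```spl
... existing saved search ...
| fields - _id
| outputlookup shodan_output
```

This avoids renaming the field (which could affect other parts of the app); it simply stops the conflicting value from reaching the collection.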
I have used a Splunk setup view as a replacement for setup.xml, built with the Splunk JS SDK. I have a password field on the setup page, and the JS SDK saves the encrypted password in the 'local/FILENAME.conf' file. The JS SDK internally calls the Splunk API. The password API (https://{HOST_NAME}/en-US/splunkd/__raw/servicesNS/nobody/{TA_NAME}/storage/passwords?count=0&output_mode=json) response returns both the plain-text password and the encrypted password. How can I avoid exposing the plain-text password in Splunk's password API response? Thanks in advance.
Hello, I have a query (e.g. "....... " | stats count, avg(...)) whose result looks like:

OwnColumn  Count  AVG
XYZ        20     40

As another column I would like to have the time range of my request (last week, last 24 hours, depending on what I selected), in readable form. I found the following commands to show me the time:

| addinfo | convert ctime(*) | eval reportDate=info_min_time." to ".info_max_time | table reportDate | rex field=reportDate "(?<FirstPart>.*\d+:\d+:\d+).*\s+to\s+(?<SecondPart>.*\d+:\d+:\d+)" | eval reportDate=FirstPart." to ".SecondPart | fields reportDate

I customized the query ("fields OwnColumn, reportDate, count, AVG...") so I can see my queries in the dashboard. The desired result would be:

OwnColumn  reportDate                                  count  AVG
XXX        02/21/2021 00:00:00 to 02/28/2021 00:00:00  20     40

However, I either get only the reportDate with all other columns empty, or "convert ctime(*)" converts the values from the other columns into dates as well, so that 43 (which was, for example, in count) also becomes a date. How do I change the query to get what I want?
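A possible fix, sketched with placeholder field names (avg(value), OwnColumn are assumptions): format only the two info_* timestamps with strftime instead of running convert ctime(*) over every field, so the other columns are left untouched, and skip the "| table reportDate" step that was discarding them:

```spl
... | stats count, avg(value) as AVG by OwnColumn
| addinfo
| eval reportDate=strftime(info_min_time, "%m/%d/%Y %H:%M:%S")." to ".strftime(info_max_time, "%m/%d/%Y %H:%M:%S")
| table OwnColumn reportDate count AVG
```

addinfo attaches info_min_time/info_max_time (the selected time range, as epoch seconds) to every result row, so reportDate appears on each row alongside the existing columns.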
Hello, right now I am struggling to identify the working hours of users by application, based on the Change or Authentication data model. The main objective is to determine the standard working time of each user; if a user performs any activity outside of that working time, an alert should trigger. Below are the queries:

| tstats `summariesonly` c as changes_count earliest(_time) as et latest(_time) as lt from datamodel=Change by All_Changes.user All_Changes.vendor_product index _time span=1d | `drop_dm_object_name(All_Changes)` | eval time_diff=((lt-et)/60/60) | search time_diff!=0 | convert ctime(et) ctime(lt)

| tstats `summariesonly` values(Authentication.signature) as signature values(sourcetype) as sourcetype latest(_time) as lt earliest(_time) as et from datamodel=Authentication.Authentication where (Authentication.is_Successful_Authentication=1) by Authentication.user Authentication.app index _time span=1d | `drop_dm_object_name("Authentication")` | eval time_diff=((lt-et)/60/60) | convert ctime(lt) ctime(et) | dedup user

The real challenge is time: some users work in different timezones, and some might work overnight.
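One way to sketch the baseline part (hedged; this does not solve the timezone problem, which would need a per-user timezone lookup you maintain yourself): bucket each authentication by hour of day, learn a per-user percentile range, and flag activity outside it:

```spl
| tstats `summariesonly` count from datamodel=Authentication.Authentication
    where Authentication.is_Successful_Authentication=1
    by Authentication.user _time span=1h
| `drop_dm_object_name("Authentication")`
| eval hour=tonumber(strftime(_time, "%H"))
| eventstats p5(hour) as start_hour p95(hour) as end_hour by user
| where hour < start_hour OR hour > end_hour
```

Note that for overnight workers the hour range wraps past midnight, so a simple p5/p95 band will mislead; those users would need their hours shifted (e.g. hour offset by 12) or handled separately.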
I have two searches. Search-A gives values like:

type  status  hostname  id  port  Size  base  cache
http  OFF     host-1    17  NA    NA    NA    NA
http  ON      host-1    6   NA    NA    NA    NA
http  ON      host-1    15  NA    NA    NA    NA
http  OFF     host-1    1   NA    NA    NA    NA
web   OFF     host-2    17  NA    NA    NA    NA
web   ON      host-2    6   NA    NA    NA    NA
http  ON      host-3    15  NA    NA    NA    NA
http  OFF     host-3    1   NA    NA    NA    NA

Search-B gives values like:

type       status         hostname  id  port  Size  base  cache
available  not_processed  host-1    17  NA    NA    NA    NA
available  not_processed  host-2    17  NA    NA    NA    NA
available  not_processed  host-4    15  NA    NA    NA    NA
available  not_processed  host-5    1   NA    NA    NA    NA

I want to merge the two searches in such a way that it checks the hostname in search-B, and if that hostname is already present in search-A, that row should not be joined/merged. The result should be something like below:

type       status         hostname  id  port  Size  base  cache
http       OFF            host-1    17  NA    NA    NA    NA
http       ON             host-1    6   NA    NA    NA    NA
http       ON             host-1    15  NA    NA    NA    NA
http       OFF            host-1    1   NA    NA    NA    NA
web        OFF            host-2    17  NA    NA    NA    NA
web        ON             host-2    6   NA    NA    NA    NA
http       ON             host-3    15  NA    NA    NA    NA
http       OFF            host-3    1   NA    NA    NA    NA
available  not_processed  host-4    15  NA    NA    NA    NA
available  not_processed  host-5    1   NA    NA    NA    NA
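A possible approach, assuming the two searches can be combined with append: tag each row with its source, then keep search-B rows only for hostnames that never appear in search-A:

```spl
<search-A> | eval src="A"
| append [ <search-B> | eval src="B" ]
| eventstats values(src) as srcs by hostname
| where src="A" OR isnull(mvfind(srcs, "A"))
| fields - src srcs
```

Here eventstats collects, per hostname, which sources contributed rows; mvfind(srcs, "A") is null exactly when no search-A row exists for that hostname, so B-only hosts (host-4, host-5 in the example) survive while overlapping ones (host-1, host-2) keep only their A rows.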
hi, in the search below I need to exclude the results where instance=_total:

index="perfmon-fr" | fields %_User_Time host instance | stats avg(%_User_Time) as "%_User_Time" by host instance

how do I do this, please?
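One way, filtering in the base search so the excluded events are never retrieved at all:

```spl
index="perfmon-fr" instance!="_total"
| fields %_User_Time host instance
| stats avg(%_User_Time) as "%_User_Time" by host instance
```

The same filter could also be added later as | search instance!="_total", but pushing it into the initial search is cheaper.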
Hi all, I have results in the below format:

"From abc customerId YETNAKCNK, operation create,consumedUnits 0"
"From abc customerId YETNAKCNJ, operation update,consumedUnits 2"

I have to convert this data to the following format:

customerId  operation  consumedUnits
YETNAKCNK   create     0
YETNAKCNJ   update     2
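A rex-based sketch that matches the two sample lines above (field names taken from the desired output):

```spl
... | rex field=_raw "customerId (?<customerId>\w+), operation (?<operation>\w+),consumedUnits (?<consumedUnits>\d+)"
| table customerId operation consumedUnits
```

If the raw text varies more than the samples show (extra spaces, different separators), the pattern would need loosening, e.g. \s* around the commas.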
Hello. I'm trying to understand something. I have a monitor input reading a file from a tk10x logger (part of OpenGTS). It logs data beginning with a line containing the "Recv:" string, followed by data parsed by the receiver from a network-received frame. I therefore have a sourcetype defined with:

[opengts-events]
BREAK_ONLY_BEFORE = Recv:
BREAK_ONLY_BEFORE_DATE = true
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
disabled = false
pulldown_type = 1

(then come some extractions, but they are not important here). To be honest, I'm not sure why I have BREAK_ONLY_BEFORE_DATE set (I configured the sourcetype from the web UI), but for now the sourcetype looks like that. With this setup I'd expect the input to split only on lines containing the "Recv:" string, and most of the time it does. But there are also breaks in other places. For example, this excerpt from the log:

(event data from a few hours earlier)
[INFO_|03/03 00:19:33|ServerSocketThread$ServerSessionThread.handleClientSession:2895] (ClientSession_000) Remote client port: /1.2.3.4:15352 [to /1.2.3.4:31272]
[INFO_|03/03 00:19:33|AbstractClientPacketHandler.printSessionStart:273] Begin TCP session (ClientSession_000): 1.2.3.4
[INFO_|03/03 00:19:33|TrackClientPacketHandler.getHandlePacket:451] Recv: ##,imei:12345678,A
[INFO_|03/03 00:19:33|TrackClientPacketHandler.getHandlePacket:461] TK103-2 Header: ##,imei:12345678,A
[INFO_|03/03 00:19:33|TrackClientPacketHandler.getHandlePacket:478] Logon ACK: LOAD
[INFO_|03/03 00:19:33|ServerSocketThread$ServerSessionThread.handleClientSession:3004] (ClientSession_000) TCP Resp Asc: LOAD
(next event starting with a line containing Recv:)

This log gets split properly at the line containing "Recv: ##,imei:12345678,A", but:
1. The first two lines get extracted as an event separate from the earlier data, even though they don't start with a line including "Recv:" (as I wrote, the last data written before that was from a few hours earlier). Why? Does the monitor input just hit some timeout and flush its buffer?
2. If I sort the events by _time in search, the "later" event (the last four lines of the excerpt above) is shown earlier than the one consisting of the first two lines. Strange.
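A possible fix, on the assumption that every real event boundary is a line containing "Recv:": do the breaking with LINE_BREAKER alone and turn off line merging, so the line-merge timeout (the most likely cause of the first two lines being flushed as their own event) no longer applies:

```spl
[opengts-events]
SHOULD_LINEMERGE = false
# break only where the next line contains "Recv:"; the capture group is the
# discarded delimiter, the lookahead keeps the Recv: line with the new event
LINE_BREAKER = ([\r\n]+)(?=[^\r\n]*Recv:)
DATETIME_CONFIG =
NO_BINARY_CHECK = true
```

With SHOULD_LINEMERGE disabled, BREAK_ONLY_BEFORE and BREAK_ONLY_BEFORE_DATE are ignored and can be removed; timestamp extraction settings may still need tuning for the [INFO_|03/03 00:19:33|...] prefix.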
Our ML team uses the API to export large numbers of events for model training. They are hitting limits: [searchresults] maxresultrows and [subsearch] maxout. We do not want to increase these limits because they are global and, although we trust the ML team, we don't trust other users. Has anyone else managed to solve this problem?
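One option worth checking: the /services/search/jobs/export REST endpoint streams results back as they are produced, rather than materializing a finished result set, so large exports are not capped by maxresultrows. A sketch (host and credentials are placeholders):

```
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/services/search/jobs/export" \
  -d search="search index=main earliest=-24h | fields *" \
  -d output_mode=json
```

This keeps the global limits untouched for everyone else while letting the ML team pull full event streams.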
Hi all, I am trying to restart the Splunk UF agent on my Linux server, but it throws the following error:

"Removing stale pid file... Can't unlink pid file "/opt/ca/splunk/splunkforwarder/var/run/splunk/splunkd.pid": Read-only file system"

My splunkd.pid file permissions are set as follows: -rw-r----- splunk splunk splunkd.pid. The agent has not been running for 2 weeks, and I have not made any changes lately. I have no clue why it is throwing that error. Could someone please help me fix this issue? Thanks in advance.
How to convert tabular data to a distinct count: Hi, I have a Splunk query, | stats count by operation (under the field operation we have activate and deactivate counts). How do I convert it to a distinct count, so that instead of the tabular format only the count is displayed?
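If the goal is a single number (how many distinct operation values exist) rather than a row per value, the dc() aggregation does this:

```spl
... | stats dc(operation) as distinct_operations
```

In a dashboard, rendering this as a single-value visualization instead of a table gives just the number.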
Hi, we have a Cisco ISE that sends logs to our Splunk using rsyslog as the receiver for TCP syslog. The problem is that some of the messages from ISE pick up LLDP information from our switches and access points and include those lines in the syslog message. This info can also be seen on the device with "show version":

Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2018 by Cisco Systems, Inc.
Compiled Mon 10-Dec-18 11:34 by mcpre

When this is added to the ISE log message it contains newlines, so the message is broken up when rsyslog writes it to disk for Splunk to read.

1st part:

<181> CISE_RADIUS_Accounting 0015021690 1 0 2021-03-01 09:36:46.766 +01:00 0376002501 3002 NOTICE Radius-Accounting: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx AP Software\, ap3g3-k9w8 Version: 8.10.130.0\

2nd part:

<13> Support: http://www.cisco.com/techsupport\

3rd part:

<13> (c) 1986-2020 by Cisco Systems\, Inc.\

4th part:

<13> Wed Jul 29 00:28:31 PDT 2020 by aut, cisco-av-pair=lldp-tlv=lldpSystemName=SW-14, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Network Device Profile=Cisco, Location=Location#All Locations#OTT#StokholmSH, Device Type=Device Type#All Device Types#Switch#DynamiskNett, #015

I can join these messages using transaction, but that is not a good solution.
<181> CISE_RADIUS_Accounting 0015021690 1 0 2020-03-01 09:36:46.766 +01:00 0376002501 3002 NOTICE Radius-Accounting: RADIUS Accounting watchdog update, ConfigVersionId=261, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, cisco-av-pair=lldp-tlv=lldpSystemDescription=Cisco AP Software\, ap3g3-k9w8 Version: 8.10.130.0\
<13> Support: http://www.cisco.com/techsupport\
<13> (c) 1986-2020 by Cisco Systems\, Inc.\
<13> Wed Jul 29 00:28:31 PDT 2020 by aut, cisco-av-pair=lldp-tlv=lldpSystemName=sw-14, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Type=Device Type#All Device Types#Switch#DynamiskNett, #015

What I can see from this is that each message coming from ISE is separated by a newline. ISE escapes the newlines within the message, but rsyslog ignores the escape \. From the message above you can see that every part ending with \ should continue, and #015 marks the final end of the message. Can I somehow get rsyslog to replace the escaped newline with a comma or another character? Or do I need to receive the data with Python, replace the escaped newlines, and forward it on to syslog?
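An alternative is to repair this on the Splunk side rather than in rsyslog: since #015 marks the true end of a record, Splunk can be told to break events only there, which stitches the continuation lines back into one event. A hedged props.conf sketch for the sourcetype reading these files (the stanza name is an assumption):

```spl
[cisco:ise:syslog]
SHOULD_LINEMERGE = false
# break only after the literal #015 terminator; parts ending in "\" on
# intermediate lines stay inside the same event
LINE_BREAKER = #015([\r\n]+)
```

The <13> priority prefixes would still appear inside the joined event and may need a SEDCMD or field extraction to clean up; and if any ISE message ever arrives without a trailing #015, its lines would merge into the next event.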
Hi all. I recently got into a discussion with my colleagues about disk usage. Assuming we ingest 100GB/day on a cluster with RF 15 and SF 11, the amount of disk used (total per day) would be:

Raw data: 15 (RF) * 15% of 100GB (15GB) = 225GB
Tsidx: 11 (SF) * 35% of 100GB (35GB) = 385GB
TOTAL: 610GB

If I want to reduce disk consumption as much as I can, what should I reduce, RF or SF? Please provide some explanation: my initial answer is to reduce SF, but my colleagues argue for reducing RF. Thanks in advance.
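Using the same 15%/35% sizing model as above, the marginal saving of each knob can be compared directly (keeping in mind that SF can never exceed RF):

```
one RF copy (rawdata) = 15% of 100GB = 15GB/day
one SF copy (tsidx)   = 35% of 100GB = 35GB/day

RF 15, SF 11: 15*15 + 11*35 = 225 + 385 = 610GB
RF 14, SF 11: 14*15 + 11*35 = 210 + 385 = 595GB   (saves 15GB/day)
RF 15, SF 10: 15*15 + 10*35 = 225 + 350 = 575GB   (saves 35GB/day)
```

Under this model, dropping one searchable copy saves more disk per step than dropping one raw copy, though the two knobs trade off different failure properties: RF protects the data itself, SF protects search availability.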
Hi, I want to color the filename value (i.e. Account) red if the value present in another field is blank. How can I do this, preferably using XML code?

filename   application  ID      Status
Account
Account1   spear        Ydg123  p
At the beginning of this month, the DHCP servers stopped feeding logs into my Splunk instance. Every day at around 12AM local time there is only one log entry, showing just the "Microsoft Windows DHCP Service Activity Log" header and the event codes, extracted from the corresponding day's DHCP log file; the DHCP log lines that follow it never appear in Splunk.

Here is the inputs.conf that was added on the DHCP servers (installed with a UF):

[monitor://$WINDIR\System32\DHCP]
disabled = 0
whitelist = DhcpSrvLog*
alwaysOpenFile = 1
crcSalt = <SOURCE>
sourcetype = DhcpSrvLog
index = windows
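One plausible cause worth checking: Windows DHCP logs begin every day with the same multi-line header, and by default the UF fingerprints a file via a CRC of its first 256 bytes. When a rotated DhcpSrvLog-*.log starts with bytes identical to a file already seen, the UF can resume from a stale offset and skip the new lines after the header. A sketch that extends the CRC window (initCrcLength is a real inputs.conf setting; 1024 is an arbitrary choice and should cover more than the header):

```spl
[monitor://$WINDIR\System32\DHCP]
disabled = 0
whitelist = DhcpSrvLog*
alwaysOpenFile = 1
crcSalt = <SOURCE>
initCrcLength = 1024
sourcetype = DhcpSrvLog
index = windows
```

If this matches the symptom, the fishbucket entries for these files may also need resetting on the forwarder for the change to take effect on already-seen files.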