All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I need to upgrade a Search Head Cluster from 7.3.4 to 8.1.9, and I have run the first two commands:

  splunk upgrade-init shcluster-members
  splunk edit shcluster-config -manual_detention on

We are monitoring the active searches using the following command:

  splunk list shcluster-member-info | grep "active"

And we see:

  active_historical_search_count:1
  active_realtime_search_count:0

The active_historical_search_count never seemed to reduce down to 0, but after 90 minutes it finally did. We checked the currently running searches and found new searches running on the detention server after an hour. We have the following set in server.conf:

  decommission_force_finish_idle_time = 0
  decommission_node_force_timeout = 300
  decommission_search_jobs_wait_secs = 180

...so why is it taking 90 minutes to stop running saved searches? We did find some saved searches that were running for a long time and we fixed them, but shouldn't all new searches be routed to another member once this one is in manual detention? What can I do to fix this so that my SHC can be upgraded?
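For anyone watching the same drain, a minimal polling sketch built from the same CLI command used above (the 30-second interval and the grep pattern are arbitrary choices, not Splunk requirements):

  # Poll the member's search counts until both reach 0
  while true; do
      splunk list shcluster-member-info | grep -E "active_(historical|realtime)_search_count"
      sleep 30
  done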
Hey Team, I have some 150+ IP addresses in CIDR format (e.g. 96.24.0.0/16, etc.), and my search results contain values such as dst_ip=96.24.123.123 that I need to filter out. If it were just one CIDR, I could simply do dst_ip!=96.24.0.0/16 or NOT dst_ip IN (96.24.0.0/16) in my SPL, but I have around 150+ CIDRs that I need to filter out. I tried to add them to a lookup file, but it seems CIDR matching in the lookup file is not working. Can someone help?
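One way to make CIDR matching work in a lookup: declare the lookup in transforms.conf with match_type = CIDR on the network column, then keep only events that find no match. A sketch, assuming a lookup file named blocked_cidrs.csv with a single column cidr (both names are placeholders):

  # transforms.conf
  [blocked_cidrs]
  filename   = blocked_cidrs.csv
  match_type = CIDR(cidr)

  # SPL: keep only events whose dst_ip falls in none of the CIDRs
  index=your_index
  | lookup blocked_cidrs cidr AS dst_ip OUTPUT cidr AS matched_cidr
  | where isnull(matched_cidr)

For just a handful of ranges, cidrmatch("96.24.0.0/16", dst_ip) inside an eval/where is the inline alternative.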
Hello, I have 2 CSVs in my Splunk.

Alert.csv has the below columns and data:

  Alert_Header                          Alert_type   Date
  JNA/athena_VICTORIA_Load [FAILED]     Autosys      01/03/2022
  JNA/athena_VICTORIA_Staging [MAXRUN]  Autosys      01/03/2022
  JNA/athena_MAIN_Load [FAILED]         Autosys      01/03/2022
  JNA/athena_OLTP_Staging [MAXRUN]      Autosys      01/03/2022
  NYP02000 has high_cpu                 DATABASE     01/03/2022

Mapping.csv has the below columns and data:

  Alert_Type   Inclusion   Exclusion   Header
  Autosys      athena      VICTORIA    ATHENA-Jobs
  Autosys      VICTORIA    NONE        VICTORIA-Jobs
  Database     high_cpu    NONE        CPU alerts

Output required:

  Alert_Header                          Alert_type   Date         Header
  JNA/athena_VICTORIA_Load [FAILED]     Autosys      01/03/2022   VICTORIA-Jobs
  JNA/athena_VICTORIA_Staging [MAXRUN]  Autosys      01/03/2022   VICTORIA-Jobs
  JNA/athena_MAIN_Load [FAILED]         Autosys      01/03/2022   ATHENA-Jobs
  JNA/athena_OLTP_Staging [MAXRUN]      Autosys      01/03/2022   ATHENA-Jobs
  NYP02000 has high_cpu                 DATABASE     01/03/2022   CPU alerts

Logic: the mapping file is for looking up patterns in Alert.csv. Any Alert_Header that contains "athena" (mentioned in Inclusion) but not "VICTORIA" (mentioned in Exclusion) is labeled "ATHENA-Jobs". Similarly, any Alert_Header that contains "VICTORIA" (mentioned in Inclusion) is labeled "VICTORIA-Jobs". NONE in the Exclusion column means there is no exclusion pattern to search for in Alert_Header.

Can you please help with this query?
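A sketch of one way to express that logic: join every mapping row for the alert type, then apply the inclusion/exclusion tests with like(). Field and file names are taken from the post; the upper() normalization is an assumption to cover the DATABASE/Database case difference:

  | inputlookup Alert.csv
  | eval join_key=upper(Alert_type)
  | join type=inner max=0 join_key
      [| inputlookup Mapping.csv
       | eval join_key=upper(Alert_Type)]
  | where like(Alert_Header, "%" . Inclusion . "%")
        AND (Exclusion="NONE" OR NOT like(Alert_Header, "%" . Exclusion . "%"))
  | table Alert_Header Alert_type Date Header

max=0 keeps all matching mapping rows per alert, so each Inclusion/Exclusion pair gets tested before the non-matching rows are discarded.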
As shown below, I have only two events present in my index, but when I execute the below search query:

  index=****
  | rex field=_raw "(?msi)(?<json_field>\{.+\}$)"
  | spath input=json_field
  | rename SCMSplunkLog.SCMFailureLog.appName AS APPNAME,
           SCMSplunkLog.SCMFailureLog.eventType AS EVENTTYPE,
           SCMSplunkLog.SCMFailureLog.payload.level AS LEVEL,
           SCMSplunkLog.SCMFailureLog.payload.errorDescription AS ERRORDESCRIPTION,
           SCMSplunkLog.SCMFailureLog.payload.startTime AS STARTDATE,
           SCMSplunkLog.SCMFailureLog.payload.endTime AS ENDDATE
  | where APPNAME!="" AND LEVEL="ERROR"
  | table APPNAME, EVENTTYPE, STARTDATE, ENDDATE, LEVEL, ERRORDESCRIPTION

I get duplicate entries in the result table, as below. Can anyone please help me with this?

Edited: attached sample JSON:

  {
    "SCMSplunkLog" : {
      "SCMFailureLog" : {
        "appName" : "Testing_splunk_alerts_log",
        "eventType" : "Testing_splunk_alerts_log",
        "payload" : {
          "level" : "ERROR",
          "startTime" : "2022-04-12T13:57:49.156Z",
          "successCount" : 0,
          "failureCount" : 0,
          "publishedCount" : 0,
          "errorCode" : 0,
          "errorDescription" : "ERROR: relation \"test.testLand\" does not exist\n Position: 8",
          "sourceCount" : 0,
          "endTime" : "2022-04-12T13:57:54.483Z"
        }
      }
    }
  }
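One frequent cause of this symptom is the same JSON being extracted twice (automatic search-time JSON extraction plus the explicit spath), which turns each field into a multivalue field that renders as duplicate rows. A hedged sketch that collapses repeated values, assuming that is what is happening here:

  ... after the rename above ...
  | foreach APPNAME EVENTTYPE STARTDATE ENDDATE LEVEL ERRORDESCRIPTION
      [ eval <<FIELD>> = mvdedup('<<FIELD>>') ]
  | where APPNAME!="" AND LEVEL="ERROR"
  | table APPNAME, EVENTTYPE, STARTDATE, ENDDATE, LEVEL, ERRORDESCRIPTION

If the sourcetype already has KV_MODE = json, dropping the rex/spath pair entirely is the cleaner fix.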
Hello, I am new to Splunk. I am trying to run a report that shows what servers our users connect to and on what ports. The report is going to be reviewed by a non-IT business unit, so it would be nice if the report could include a description for each port (for example: NetBIOS, SQL, LDAP, etc.). Is there a way to do that? Thank you in advance!
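A common pattern for this is a small CSV lookup mapping port numbers to friendly names. A sketch, assuming you create a lookup file called port_names.csv and that your events carry user, dest, and dest_port fields (all of these names are placeholders for your own):

  port_names.csv:
  port,description
  139,NetBIOS
  389,LDAP
  1433,SQL Server

  SPL:
  index=network_traffic
  | lookup port_names port AS dest_port OUTPUT description AS port_description
  | eval port_description=coalesce(port_description, "Unknown")
  | stats count by user, dest, dest_port, port_description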
Hi, we need to export login events from Windows and Linux servers from Splunk Cloud Platform to another system for further analysis. In on-prem deployments, we were able to use a forwarder to export syslog data over a TLS connection. Is this an option in Splunk Cloud as well? We see there is an option to use a REST API to get data from Splunk Cloud, but is it practical when we are talking about a large amount of data, all the time? We need to get the data within a few seconds, and we are talking about a large number of servers, so I'm not sure that polling with the REST API is the way to go. Alternatively, are there other ways? Maybe cloud-native ways, like exporting to AWS CloudWatch or Kinesis streams? Thanks, Gabriel
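If you do end up testing the REST route, the streaming export endpoint avoids repeated job polling by returning results as they are produced. A minimal sketch (the stack hostname, credentials, and search are placeholders; whether throughput holds up for continuous high-volume export is exactly the open question above):

  curl -u admin:changeme \
       "https://yourstack.splunkcloud.com:8089/services/search/jobs/export" \
       -d search="search index=os_logs sourcetype=linux_secure earliest=-5m" \
       -d output_mode=json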
Hello, I am using a scheduled report to fill a summary index. The report is supposed to work with index time and process everything that arrived within the last hour. Therefore I configured the time range the following way (screenshot attached). But somehow the scheduled report restricts the event time to the last 24 hours, which can yield no search results when the indexed data is from the day before yesterday (screenshot attached). Does someone know why this restriction happens, although the event time is supposed to be unrestricted? Thanks and best regards
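A workaround that sidesteps the time picker: set the report's time range to All time and constrain index time inside the SPL itself with the _index_earliest/_index_latest modifiers (index name below is a placeholder):

  index=your_index _index_earliest=-1h@h _index_latest=@h
  | ... your summary-generating transformation ...

This way, events indexed in the last hour are processed even when their event timestamps are days old.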
I am trying to create a dashboard that shows % availability over a set period of time. I am trying to calculate all calls - 5xx failures - 400 failures. However, I am not sure if 400 failures are also being counted in the successful call line, and if other 4xx failures are included in the fourHundredFail line. Is the below the correct way to calculate this? Thank you for your help!

  vhost="mainbrand"
  | eval successfulCall=if('httpstatus'=200 OR 'httpstatus'=201 OR 'httpstatus'=204 OR 'httpstatus'=401 OR 'httpstatus'=403 OR 'httpstatus'=404 OR 'httpstatus'=422 OR 'httpstatus'=429, 1, 0)
  | eval fourHundredFail=if('httpstatus'=400, 1, 0)
  | eval technicalFail=if(match(substr('httpstatus',1,1),"5"), 1, 0)
  | eval totalSuccesfulCalls=successfulCall-fourHundredFail-technicalFail
  | stats sum(successfulCall) as "2xx_or_4xx_Calls" sum(fourHundredFail) as "400_Failures" sum(technicalFail) as "5xx_Failures" sum(totalSuccesfulCalls) as "Total_Successful_Calls" by vhost
  | eval percentageAvailability=(('Total_Successful_Calls'/'2xx_or_4xx_Calls')*100)
  | eval percentageAvailability=round('percentageAvailability', 2)
  | table vhost, "2xx_or_4xx_Calls", "400_Failures", "5xx_Failures", "Total_Successful_Calls", percentageAvailability
  | appendpipe [stats avg(percentageAvailability) as averagePercentage]
  | eval averagePercentage=round('averagePercentage', 2)
  | sort percentageAvailability
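One way to remove the ambiguity is to classify every status code into exactly one bucket first, so nothing can be double-counted. A sketch using the same status groupings from the post:

  vhost="mainbrand"
  | eval bucket=case(
        'httpstatus'=400, "400_fail",
        match('httpstatus', "^5"), "5xx_fail",
        in('httpstatus', "200","201","204","401","403","404","422","429"), "success",
        true(), "other")
  | stats count(eval(bucket="success")) AS Total_Successful_Calls
          count(eval(bucket="400_fail")) AS "400_Failures"
          count(eval(bucket="5xx_fail")) AS "5xx_Failures"
          count AS All_Calls
          by vhost
  | eval percentageAvailability=round(Total_Successful_Calls/All_Calls*100, 2)

Because case() returns the first matching branch, a 400 can never also land in the success bucket, and any 4xx not listed falls into "other" where it stays visible instead of being silently dropped.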
I have a fairly large (3,400 records) search result that randomly contains non-ASCII characters in any one of the 20 fields. This is normally not an issue, but the sendemail.py script used by Splunk's email alerting system is erroring out because of it. Is there a way to remove non-ASCII characters from all search results? See below for a search example:

  | inputlookup StatusReport.csv
  | fields Name ID BusinessGroup Class Email Issues Comments Dynamic Status Owner HoB
  | sort ID
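A sketch that strips anything outside printable ASCII from every listed field, using foreach so the same replace() is applied across all columns (the character range is an assumption about what sendemail.py tolerates):

  | inputlookup StatusReport.csv
  | fields Name ID BusinessGroup Class Email Issues Comments Dynamic Status Owner HoB
  | foreach Name ID BusinessGroup Class Email Issues Comments Dynamic Status Owner HoB
      [ eval <<FIELD>> = replace('<<FIELD>>', "[^\x20-\x7E]", "") ]
  | sort ID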
We're running Splunk 8.2.2 with the Microsoft Azure Add-on version 3.1.1, installed on a heavy forwarder. Most of the time the logs pull fine, but every 1-2 months they stop pulling. If we let it sit for ~8 hours, it eventually starts working again. If we disable and then re-enable the input, it starts working right away. What steps should we take next time this happens to troubleshoot why this is occurring?
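Next time it stalls, the add-on's own logging in _internal is usually the first place to look. A starting-point sketch (the "azure" keyword is a loose filter and the host is a placeholder; the exact log source name varies by add-on version, so adjust after seeing what comes back):

  index=_internal host=<your_heavy_forwarder> (log_level=ERROR OR log_level=WARN*) azure
  | stats count by source, component
  | sort - count

Comparing the timestamp of the last successful pull against any 401/429/timeout errors there typically narrows it down to token expiry, API throttling, or a hung input process.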
Hello everyone! I have search results like A=1, B=1, C=3. I have a lookup like:

  Type      A   B   C
  server1   1   1   4
  server2   1   1   5
  server3   1   1   6

I need to make a search that compares the results from my search with the lookup: if at least one value in the corresponding column is equal, the column is true; if not, false. For example, A=1, and server1, server2 and server3 all have 1 in column A, so I need the result A=true. B is the same. But in column C there is no "3", so C is false. Help me please.
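A sketch of one approach: pull each lookup column into a multivalue field with appendcols, then test membership with mvfind. The lookup file name server_lookup.csv is a placeholder, and this assumes your base search yields a single result row (appendcols pairs rows positionally):

  <your search producing fields A, B, C>
  | appendcols
      [| inputlookup server_lookup.csv
       | stats values(A) AS lkp_A values(B) AS lkp_B values(C) AS lkp_C]
  | eval A=if(isnotnull(mvfind(lkp_A, "^".A."$")), "true", "false")
  | eval B=if(isnotnull(mvfind(lkp_B, "^".B."$")), "true", "false")
  | eval C=if(isnotnull(mvfind(lkp_C, "^".C."$")), "true", "false")
  | table A B C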
Hi, I am looking to upgrade the AppDynamics .NET agent to the latest version on a Windows server during the Windows patching activity. There should not be any additional downtime to restart IIS, so I want to configure all the parameters ahead of the restart window. Please provide steps to help do this. Regards, Sushma
Can we create a Splunk dashboard using Python scripting in Splunk Cloud? If yes, what is the process for the same?
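Dashboards are stored as views and can be created programmatically through the REST API, which a Python script can call; shown here with curl to keep the sketch compact (stack name, credentials, app, and dashboard XML are all placeholders):

  curl -k -u admin:changeme \
       "https://yourstack.splunkcloud.com:8089/servicesNS/admin/search/data/ui/views" \
       -d name=my_generated_dashboard \
       --data-urlencode "eai:data=<dashboard><label>Generated</label></dashboard>"

A Python script (e.g. via the requests library or the Splunk SDK for Python) would POST the same name and eai:data parameters. Note that REST access to a Splunk Cloud stack on port 8089 typically has to be requested/allowlisted first.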
Please check out the idea here (because I don't think it's currently possible with Splunk, unless someone has a workaround or solution that I don't know about) - https://ideas.splunk.com/ideas/EID-I-1417

(Copying the same content here; I recommend upvoting the idea if you think this is currently not possible with Splunk today.) Does anyone know if it is possible to add metadata field(s) to identify all the Splunk instances that have processed a particular event? Let me explain with an example: I'm collecting WinEventLog from instance1 using a UF, which forwards the logs to instance2 (an intermediate UF), which forwards to an intermediate HF (instance3), which forwards the data to an indexer (idx1).

  instance1 (UF) -> instance2 (I UF) -> instance3 (I HF) -> idx1 (Indexer)

I want to see if there is a way to get a meta field (an index-time field) that records the full sequence of Splunk instances a particular event has traveled through. This would be useful when troubleshooting complex environments. Even having this as a debug option we could enable would cover it. I don't think it's currently possible, unless someone has a workaround or solution.
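Not a full answer to the idea, but a known partial workaround: stamp an indexed field on the parsing tier so you can at least identify which heavy forwarder handled each event. Only the first parsing-tier instance in the pipeline runs index-time transforms (cooked data is not re-parsed downstream, and UFs don't parse at all), which is exactly why the full hop-by-hop path isn't achievable today. Stanza and field names below are placeholders:

  # transforms.conf on the heavy forwarder
  [add_hf_name]
  INGEST_EVAL = hf_host="instance3"

  # props.conf on the heavy forwarder
  [default]
  TRANSFORMS-hf = add_hf_name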
Has anyone tried sending data from a HF to a UF? I know it's a stupid question, and I know it's not going to work. But if someone has tried it before, intentionally or by mistake, I'm curious to know what happens in this scenario: what error will the HF throw, and what error will the UF throw?
Hi, I have a dashboard and I need to limit the view of this dashboard to people with certain IP addresses. Is this possible and, if yes, how? Thanks, Patrick
Hello, I am collecting logs from various endpoints via UFs into a Splunk HF. One of the data inputs is firewall logs via syslog on port 514. My question is: will I have to set up a data input on port 514 on Splunk Cloud too? Or will all logs be forwarded globally on port 9997, which is already set? Thanks!
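For what it's worth, the usual pattern is that the HF terminates syslog locally and everything leaves the HF over the forwarding port, so Splunk Cloud needs no 514 input. A sketch of the HF side (sourcetype and index names are placeholders):

  # inputs.conf on the heavy forwarder
  [udp://514]
  sourcetype = firewall_syslog
  index = network

  # outputs.conf (from the Splunk Cloud forwarder credentials app)
  # already forwards all of this to your stack on 9997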
Hello guys. We use the SNMP Modular Input to poll data from devices. We use Cisco devices, added the Cisco MIBs, then added the IF-MIB. After adding the IF-MIB file on the heavy forwarder in the snmp_ta folder, on the ad hoc search head we see only 6 fields from the IF-MIB (ifDescr_1, ... ifDescr_6). When we do an snmpwalk from the heavy forwarder to the device, we see 87 fields with interface names. It seems there may be a limit somewhere (limits.conf?), but maybe you have had the same situation?
Hi all, I have two sourcetypes in the same index. The field names are different, but the value is the same for the email address of a user. Yet when I do a coalesce or use a | where clause, Splunk shows "No results found". For example: sourcetype s1 contains an email field, while s2 contains a user_email field. Both fields have the same value: john_smith@domain.com

  index=xx (sourcetype=s1 OR sourcetype=s2) (email=* OR user_email=*)
  | eval user_id = coalesce(email, user_email)

or

  index=xx (sourcetype=s1 OR sourcetype=s2)
  | where email=user_email

Result: No results found. I am following what is mentioned in https://community.splunk.com/t5/Splunk-Search/merge-two-sourcetypes-that-have-the-same-data-but-different/m-p/493244, but in my case it shows 0 matching results. Any idea what the issue can be? Is the @ sign or the "." (dot) in the email address creating a problem?
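One thing worth noting: | where email=user_email can never match here, because each event carries only one of the two fields, so the comparison is always against null. A sketch of the usual merge-then-correlate pattern instead (field names from the post):

  index=xx (sourcetype=s1 OR sourcetype=s2)
  | eval user_id=coalesce(email, user_email)
  | stats values(sourcetype) AS sourcetypes dc(sourcetype) AS st_count by user_id
  | where st_count=2

If even the coalesce variant returns nothing, verify with index=xx sourcetype=s1 | table email that the fields are actually extracted at search time; the @ and . characters themselves are not a problem in field values.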
After running the query, there is no option for Timestamp under Choose Column; it shows "No result". Kindly see the image. I'm going to use the "Timestamp" column, and I'm currently a beginner here.