Hi, I have DNS logs with parentheses + numbers instead of dots in the URL field. How can I replace them with dots? Below are some examples from the logs.

(5)_ldap(4)_tcp(5)cmp(6)_sites(3)rub(3)net(2)oz(0)
(4)wpad(3)rub(3)net(0)
(5)_ldap(4)_tcp(2)dc(6)_msdcs(9)dc(7)core(2)t4(3)rub(3)net(0)

Thank you!
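One way to sketch this in SPL, assuming the field is named url (adjust to your actual field name): each (n) is a DNS label-length marker, so replacing every "(digits)" with a dot and trimming the stray leading/trailing dots reconstructs the dotted name.

```
| eval url=replace(url, "\(\d+\)", ".")
| eval url=trim(url, ".")
```

For example, (4)wpad(3)rub(3)net(0) becomes .wpad.rub.net. after the replace, and wpad.rub.net after the trim.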
Hi chaps, we are having an issue where searches are delayed at the SHC captain following an upgrade from 7.x to 8.x. There are a variety of errors, some related to artifact replication, some from SHCMaster/SHCSlave, HttpListener, etc. Note: replication and all other required ports are open on all instances.

SHCSlave:
06-18-2020 12:31:44.413 +0100 INFO SHCSlave - event=SHPSlave::handleReplicationError aid=scheduler__admin__nmon__RMD51d5d480c7c4e780c_at_1592479800_2330_644D578C-F001-4711-B459-2338E22DF399 src=644D578C-F001-4711-B459-2338E22DF399 tgt=638683B3-25D9-4D2A-AF2E-4E43362FDBFA failing=644D578C-F001-4711-B459-2338E22DF399 queued replication error job

SHCMaster:
06-18-2020 12:48:04.200 +0100 INFO SHCMaster - event=SHPMaster::handleReplicationError replication error src=644D578C-F001-4711-B459-2338E22DF399 tgt=638683B3-25D9-4D2A-AF2E-4E43362FDBFA failing=src aid=scheduler_c3ZjX3N1bW1hcmlzZXI_ZWVfc2VhcmNoX2NhbA__RMD579a1d02cbad79018_at_1592480820_2426_644D578C-F001-4711-B459-2338E22DF399

DispatchManager: this component has the highest number of events and had none before the upgrade.
06-18-2020 12:59:41.537 +0100 WARN DispatchManager - enforceQuotas: username="apasha", search_id="apasha__apasha_ZWVfc2VhcmNoX3BjcmY__search3_1592481563.5052_644D578C-F001-4711-B459-2338E22DF399" - QUEUED reason="The maximum number of concurrent historical searches for this user based on their role quota has been reached.", concurrency_limit="2"

The above log looks like an ad-hoc search and it says QUEUED. What would be the reason? Any help is highly appreciated. Thanks. See the logs below too.
ArtifactReplicator:
06-18-2020 12:32:41.201 +0100 WARN ArtifactReplicator - event=artifactReplicationFailed type=ReplicationFiles files="/opt/splunk/var/run/splunk/dispatch/_splunktemps/send/s2s/scheduler_c3ZjX3N1bW1hcmlzZXI_ZWVfc2VhcmNoX2NhbA__RMD527b16720760c2872_at_1592479740_2005_638683B3-25D9-4D2A-AF2E-4E43362FDBFA-644D578C-F001-4711-B459-2338E22DF399.tar" guid=644D578C-F001-4711-B459-2338E22DF399 host=10.164.196.166 s2sport=8999 aid=4229. Connection failed
06-18-2020 12:32:41.200 +0100 WARN ArtifactReplicator - Connection failed
06-18-2020 12:32:41.200 +0100 WARN ArtifactReplicator - Replication connection to ip=10.164.196.166:8999 timed out
(host = searchcr01, source = /opt/splunk/var/log/splunk/splunkd.log, sourcetype = splunkd)

@gcusello - Can you please help me? Regards, Pramodh
Hey everyone. I have never tried creating an event annotation before, so I am not able to grasp it properly. I want to show a line for both the min and max values of a date field. For example:

Dates          Target
6/09/2020      X
6/10/2020      X
...
6/23/2020      X

So the min and max values of the dates field, i.e. 6/09/2020 and 6/23/2020, should be shown as lines (event annotations). For now the dates in between shouldn't display, but later I should be able to add data for any dates and it should show as a line or area chart. The event annotation should display the target X and the date when we hover on it. All the examples of annotations I have seen use timecharts. I want something like the mock image below. Any help would be great. Thanks.
Hello everyone, I'm trying to build a search for Pass the Hash. I've seen the article below: https://blog.stealthbits.com/how-to-detect-pass-the-hash-attacks/ However, there is no Sysmon in my environment, so I made this:

index=windows signature_id=4624 Logon_Type=9 Logon_Process=seclogo | transaction host endswith="EventCode=4672" maxevents=10

I'm not sure if I used the transaction command the proper way. Thanks for suggestions!
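For comparison, here is a hedged sketch that avoids transaction (which is memory-hungry) and instead correlates the two event codes per host inside short time buckets; the field names signature_id, Logon_Type, and Logon_Process are taken from the search above, and the 1-minute window is an assumption to tune:

```
index=windows (signature_id=4624 Logon_Type=9 Logon_Process=seclogo) OR signature_id=4672
| bin _time span=1m
| stats values(signature_id) as sids by host _time
| where mvcount(sids)=2
```

A host/minute bucket containing both 4624 (logon with explicit credentials) and 4672 (special privileges assigned) surfaces as a candidate; it will need tuning against legitimate admin activity.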
Hello, my company is one of Splunk's partners, and our security team has several simple questions regarding Splunk Enterprise Security. I've tried to get answers by emailing support@splunk.com, but unfortunately that address is no longer supported. I also tried the Splunk support line, +14158488400, with no success either. Could someone please tell me how to contact support and get our questions answered? It would be great if I could call someone or write an email. Thanks!
I see the error messages below in my search head cluster. Can someone please assist with this?

06-18-2020 12:28:05.026 +0100 WARN ArtifactReplicator - Replication connection to ip=XXXX:8999 timed out
06-18-2020 12:28:05.026 +0100 WARN ArtifactReplicator - Connection failed
06-18-2020 12:28:05.026 +0100 WARN ArtifactReplicator - event=artifactReplicationFailed type=ReplicationFiles files="/opt/splunk/var/run/splunk/dispatch/_splunktemps/send/s2s/scheduler_c3ZjX3N1bW1hcmlzZXI_ZWVfc2VhcmNoX2V4cG9zdXJlbGF5ZXI__RMD5ac8aa10718becb19_at_1592479620_2301_644D578C-F001-4711-B459-2338E22DF399-638683B3-25D9-4D2A-AF2E-4E43362FDBFA.tar" guid=638683B3-25D9-4D2A-AF2E-4E43362FDBFA host=XXXX s2sport=8999 aid=4733. Connection failed
I want to find out how long a host was down. Please share a query to check this. Thanks in advance.
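A minimal sketch, assuming "down" means the host stopped sending data to Splunk and that your hosts normally log continuously (index scope and the 600-second threshold are assumptions to adjust):

```
| tstats latest(_time) as last_seen where index=* by host
| eval down_seconds = now() - last_seen
| where down_seconds > 600
| eval down_for = tostring(down_seconds, "duration")
| table host down_for
```

This reports, per host, how long it has been since the last indexed event, which is a proxy for downtime rather than a direct measurement.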
Our logs have URLs logged in the manner below:

/v1/customers/1/sites?includeContacts=True&showOnlyPrimarySites=True&purpose=Billing&pageNumber=1&pageSize=10

These query string params have default values in the API, so they may not all be present in each log entry. This is what I've got so far: https://regex101.com/r/5Ynk4f/1 I need to write the output in a tabular format:

includeContacts    showOnlyPrimarySites    purpose    count
true               true                    billing    30
false              false                              50

Thanks, Arun
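One hedged SPL sketch (the base search and the field name url are assumptions): extract each parameter with rex, blank-fill the ones the API defaulted, then count. The fillnull matters because stats drops events where any group-by field is null.

```
index=web_logs
| rex field=url "includeContacts=(?<includeContacts>[^&]+)"
| rex field=url "showOnlyPrimarySites=(?<showOnlyPrimarySites>[^&]+)"
| rex field=url "purpose=(?<purpose>[^&]+)"
| fillnull value="" includeContacts showOnlyPrimarySites purpose
| stats count by includeContacts showOnlyPrimarySites purpose
```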
Events are not getting generated after 15th June 2019 for the following query:

index=webmethods_prd sourcetype="webmethods:wmerror"

However, events are getting generated for the dates before 15th June 2019. The user needs the events to be generated for the dates after 15th June 2019 as well. What could be the problem?
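A quick check worth running (a sketch, not a diagnosis): count indexed events per day across the cutoff date to see whether ingestion actually stopped there or whether events are arriving with unexpected timestamps.

```
| tstats count where index=webmethods_prd sourcetype="webmethods:wmerror" by _time span=1d
```

Run this over a time range spanning 15th June 2019. If counts drop to zero at the boundary, the likely suspects are the forwarder, the input stanza, or the data source itself; if events exist but with odd dates, timestamp extraction is worth reviewing.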
Our organization has the Splunk Security Essentials app, and our end goal is to map our data sources to the MITRE ATT&CK framework. The issue is that the live data which was added manually in the Data Inventory tab is not showing up in the data source check. Please find the screenshot below. I added the entries manually when they were not auto-detected; we made a custom entry assigning the index and source types. But it is failing to show the live data, even though the data actually exists. Any leads on this would be helpful.
Our Splunk server has stopped by itself two days in a row. I am trying to find the reason, but I cannot find anything related in /opt/splunk/var/log/splunk. Could someone please advise where I should be looking for the related logs?
Hi Splunkers, hope you guys are all well. I'm trying to do an adaptation of the search in this post (thanks to @elliotproebstel and @javiergn!): https://community.splunk.com/t5/Getting-Data-In/How-to-calculate-total-Business-hours-in-between-weekend-days/td-p/304838 I'm working in UTC, and in my case I'm interested in counting the hours between 1 PM and 1 AM (the next day). It works great for other teams whose hours fall within the same day, but I'm finding the next-day case tricky. This is what I have so far:

| eval start=strptime(reported_time,"%b %d %Y %H:%M:%S")
| eval end=strptime(processed_time,"%b %d %Y %H:%M:%S")
| eval minute = mvrange(0, (end - start), 60)
| mvexpand minute
| eval _time = start + minute
| eval myHour = strftime(_time,"%H")
| eval myMinute = strftime(_time,"%H")
| eval myDay = strftime(_time,"%A")
| eval myMonth = strftime(_time,"%b")
| where myDay != "Saturday" AND myDay != "Sunday" AND myHour >= 13 AND myHour <=1
| stats count as durationInMinutes by ticket,reported_time,processed_time
| eval duration = tostring(durationInMinutes*60, "duration")
| eval SLO=if(durationInMinutes>60,"SLO Fail","SLO Achieved")
| table ticket,reported_time,processed_time,duration,SLO
| sort by - duration

I want my table to show: ticket number, reported time (when it was reported), processed time (when it got worked by an engineer), duration (time between reported time and processed time, counting only hours between 1 PM and 1 AM the next day), and whether the SLO was met. Thanks for the help!

Wheresmydata.
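For what it's worth, the where clause above looks like the sticking point: an hour can never be both >= 13 and <= 1, so a window that crosses midnight needs OR rather than AND. A hedged one-line fix, with everything else left unchanged:

```
| where myDay != "Saturday" AND myDay != "Sunday" AND (myHour >= 13 OR myHour < 1)
```

This keeps minutes from 13:00 through 23:59 plus the 00:00-00:59 hour of the following day. Note also that the myMinute eval above uses "%H" where "%M" was presumably intended, though myMinute is unused downstream.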
I just moved a node from one application to another application in AppDynamics, and I made the changes in the app agent and machine agent controller.xml. I'm still not seeing the historic data on the new node. Can someone help me out? Thanks in advance.
Hi, I use a scheduled search to generate a CSV lookup automatically:

patch
| table Computer Site OSVersion
| rename Computer as host
| outputlookup host.csv

But on the first line of the CSV I need the three fields displayed in the header, like host, site, and OS version. If I add these fields to the CSV before running the search, will they be deleted when the search finishes? Thanks.
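On the header question: outputlookup rewrites the whole CSV, header row included, every time the search runs, so a hand-edited header will be overwritten. A sketch of producing the desired headers directly from the search (the target column names site and os_version are assumptions):

```
patch
| table Computer Site OSVersion
| rename Computer as host, Site as site, OSVersion as os_version
| outputlookup host.csv
```

The CSV header is simply the field names at the moment outputlookup runs, so renaming before the write gives the header you want on every refresh.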
Can I share the output of an outputlookup command from one search head to another search head?

This is my setup for this testing: 2 search heads, 1 cluster master, and 1 indexer. The search heads are not connected to each other.

Scenario: I will run an | outputlookup command on SH1, and the client wants to share that result with SH2. I am currently looking at the KV store - I have set it up (replicate=true in collections.conf), but the lookup is not showing on SH2.
Are there any disadvantages to installing the Windows Infrastructure app on the ES search head if the SH has 32 GB RAM and 24 CPUs?
Hello, I am creating a panel on a dashboard which shows SLA % for different teams, and each of these teams has a different SLA target. I have a search ready to get the required data (i.e., Team, Actual SLA, Target SLA). I am using the Number Display Viz (Spinner - Big Dash), with Team as title, Actual SLA as value, and Target SLA as subtitle. I can format the threshold coloring from the "General" tab based on the value (Actual SLA), but how can I format threshold coloring such that the spinner is red if Actual SLA is less than Target SLA, and green if Actual SLA is greater than Target SLA? Thank you. Madhav
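One hedged workaround when a viz can only apply thresholds at a fixed value: normalize the actual against the per-team target in the search, then threshold at a constant boundary of 100 (the field names actual_sla and target_sla are assumptions for whatever your search produces):

```
| eval pct_of_target = round(actual_sla / target_sla * 100, 1)
```

Use pct_of_target as the viz value and set the color ranges to red below 100 and green at or above 100; each team then gets its own effective target without per-team threshold configuration.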
Here we are using a PowerShell script to extract AD subnet data from a Windows server. It is scheduled to run once every week to pull the list of subnets for a particular root domain. We are supposed to get results only from the parent domain; instead we are getting results from the subdomains, which is not the expected result. Also, when we run the script directly in PowerShell ISE we get a list of subnets from one domain, but the result we see in Splunk is entirely different, covering different domains than the results we get when running the script locally. Since the results contain subnets and IP addresses, we are unable to attach a screenshot due to the sensitivity of the results.

Script used:

# extract domain rootDNS
$domain=$([adsi] "LDAP://RootDSE").Get("rootDomainNamingContext") -replace "DC=","" -replace ",","."
# create LDAP path for AD subnets
$subnetsDN="LDAP://CN=Subnets,CN=Sites," + $([adsi] "LDAP://RootDSE").Get("ConfigurationNamingContext")
# output properties
$props = @{domain=$domain;ip="";sitename=""}
# get subnets and loop through list
foreach ($subnetDN in $([adsi] $subnetsDN).psbase.children){
    $tmpTable = new-object psobject -property $props
    # get CN of the subnet's site object
    $tmpTable.sitename=$([adsi] "LDAP://$($subnetDN.siteObject)").cn.tostring()
    # validate sitename
    if($tmpTable.sitename -ne $null){
        # extract subnet CN
        $tmpTable.ip=$subnetDN.cn.tostring()
        # write output
        $tmpTable | select domain, ip, sitename
    }
}

Input stanza:

[powershell://Get-ADSubnets]
script = . "$SplunkHome\etc\apps\bhp_hf_ad_audit_powershell_subnet\bin\RunAudit-ADSubnets.ps1"
schedule = 30 15 * * 0
sourcetype = microsoft:ad:subnets
Hi all, I'm working on automation for removing inactive applications from the controller. For this I'm checking the following:

1) applications with tiers having zero (0) node count and no health status (?)
2) applications with tiers having zero (0) node count and 0 calls per month

I'm having issues finding the health status of a node in a particular tier. It can easily be found in the UI, but I need it through an API call. Can someone please help with this? Thanks, Pradeep
Hi, I have inherited a Splunk installation done by a third party. We are currently using Splunk Enterprise version 8.0.2, with universal forwarders on a Solaris host (11.3) and 4 Solaris zones on that host. We are experiencing very high memory consumption and CPU usage on the host and the respective zones, but a restart of the Splunk daemon usually resolves the memory issues. We are currently restarting the Splunk daemons every 4-5 days, and when we do restart the Splunk services they jump to the top of the CPU users the moment they start. I have read that the high CPU could be attributed to the number of files/directories being monitored, so I ran the "splunk list monitor" command on each monitored zone and on the host, and found that certain directories were being monitored across all forwarders even when those directories didn't exist on that zone. I still don't know enough about Splunk (I am working through a Pluralsight Splunk fundamentals training course) to know whether the list of files/directories to be monitored is set at a zone/machine level or globally, and where I can go to find out. Any assistance in this regard would be greatly appreciated. Thanks, Mel