All Topics


I have been trying to get nmap output into Splunk. I thought the XML output would be nice and straightforward! Whilst the events are separated, the issue is having multiple values of the same field in an event. Automatic field extractions pick up the first <port...> section but the rest are ignored. I tried using KV_MODE=xml but that didn't make a difference. I thought Splunk was quite happy pulling in multi values with XML, but maybe it's not quite the XML Splunk is expecting. I am on Splunk Cloud so CLI changes are not an option. Any pointers appreciated! Thanks.

<host starttime="1633560153" endtime="1633560291">
  <status state="up" reason="timestamp-reply" reason_ttl="34"/>
  <address addr="81.123.123.123" addrtype="ipv4"/>
  <hostnames>
    <hostname name="host.name.resolved.com" type="PTR"/>
  </hostnames>
  <ports>
    <port protocol="tcp" portid="21"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="ftp" method="table" conf="3"/></port>
    <port protocol="tcp" portid="22"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="ssh" method="table" conf="3"/></port>
    <port protocol="tcp" portid="23"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="telnet" method="table" conf="3"/></port>
    <port protocol="tcp" portid="80"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http" method="table" conf="3"/></port>
    <port protocol="tcp" portid="443"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="https" method="table" conf="3"/></port>
    <port protocol="tcp" portid="8000"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http-alt" method="table" conf="3"/></port>
    <port protocol="tcp" portid="8080"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http-proxy" method="table" conf="3"/></port>
    <port protocol="tcp" portid="8888"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="sun-answerbook" method="table" conf="3"/></port>
  </ports>
  <times srtt="15851" rttvar="15851" to="100000"/>
</host>
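One search-time workaround, as a sketch: spath can pull XML attributes into multivalue fields regardless of KV_MODE, assuming each event is a single <host> element as in the sample above (the output field names are arbitrary):

| spath path=host.ports.port{@portid} output=dest_port
| spath path=host.ports.port.service{@name} output=dest_service
| spath path=host.address{@addr} output=dest_ip
| table dest_ip dest_port dest_service

With the sample event this yields one row where dest_port and dest_service are multivalue fields holding all eight ports and services, which can then be expanded with mvexpand if needed.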
I'm trying to figure out how to get the time difference between two events that share the same UUID. However, the second event doesn't always happen. How can I alter my query to show the time difference between event1 and event2, and just leave event2 NULL if it does not exist for that UUID? Here is what I have so far; it shows the UUID, time_start of event1, time_finish of event2, and the difference. I think I might need to use the transaction command instead of stats but would like to avoid anything too resource intensive.

index=* sourcetype=* eventCode="event1" serviceCallUUID=*
| stats latest(_time) as time_start by serviceCallUUID
| join serviceCallUUID
    [ search index=* sourcetype=* eventCode="event2" serviceCallUUID=*
      | stats latest(_time) as time_finish by serviceCallUUID ]
| eval difference=time_finish-time_start
| eval difference=strftime(difference,"%M:%S")
| eval time_finish=strftime(time_finish,"%m-%d-%Y %H:%M:%S.%f")
| eval time_start=strftime(time_start,"%m-%d-%Y %H:%M:%S.%f")

Thanks!
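A join-free sketch that keeps UUIDs with no event2 (time_finish simply stays empty, so difference is null), assuming both event codes can be searched in one pass:

index=* sourcetype=* eventCode IN ("event1","event2") serviceCallUUID=*
| stats max(eval(if(eventCode=="event1",_time,null()))) as time_start
        max(eval(if(eventCode=="event2",_time,null()))) as time_finish
        by serviceCallUUID
| eval difference=time_finish-time_start

The strftime formatting from the original query can then be appended unchanged. Because inner joins drop UUIDs missing from the subsearch (and join subsearches have result limits), a single stats pass is usually both more correct and cheaper here.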
Alerts and Reports are both persisted in savedsearches.conf. How does the UI decide that a certain entry shall be displayed under the Alerts page (../app_name/alerts)? The question "Report vs. Alert, what's the difference?" mentions this in its second paragraph: "while we use Alert for a Search that will make a determination to take action in contacting the outside world via email or script execution if its results match a criteria."

Question 1: What are the settings that make an entry show under Alerts (and not under Reports)?

Question 2: I want to deploy the alerts of my app with the sole action of "Add to Triggered Alerts" (for which I do use the setting alert.track = 1). No email, no script. Is this possible?
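For reference, a minimal savedsearches.conf sketch of a scheduled search that the UI generally lists as an alert rather than a report; broadly, it is the alert-condition keys (counttype/relation/quantity, or alert_type) on a scheduled search that distinguish the two. The stanza name and search string are hypothetical:

[example_triggered_only_alert]
search = index=app_logs log_level=ERROR
enableSched = 1
cron_schedule = */15 * * * *
# Alert condition: trigger when the result count is greater than 0
counttype = number of events
relation = greater than
quantity = 0
# Sole action: list under Triggered Alerts (no email, no script)
alert.track = 1
alert.severity = 3

With alert.track = 1 and no action.email/action.script keys, the only effect on trigger is an entry under Triggered Alerts, which appears to match what Question 2 asks for.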
udp/7511 syslog transmission was set up on three firewalls. The same port cannot be registered again in Splunk Web. I used the method below (in /opt/splunk/etc/apps/search/local/inputs.conf), but it failed. However, logs are received when set to another port in Splunk Web.

[udp://7511]
connection_host = ip
host = 192.168.10.10
index = fw1
source = fw1_source
sourcetype = syslog

[udp://7511]
connection_host = ip
host = 192.168.10.20
index = fw2
source = fw2_source
sourcetype = syslog

[udp://7511]
connection_host = ip
host = 192.168.10.30
index = fw3
source = fw3_source
sourcetype = syslog
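Note that a stanza name like [udp://7511] can only exist once: duplicate stanzas in one file merge, with later values overriding earlier ones, which may explain the failure. A sketch of an alternative (the index name fw_all and the transform names are made up): keep a single UDP input and route events to per-firewall indexes at parse time by sender IP.

# inputs.conf
[udp://7511]
connection_host = ip
sourcetype = syslog
index = fw_all

# props.conf
[syslog]
TRANSFORMS-fw_route = route_fw1, route_fw2, route_fw3

# transforms.conf (route_fw2/route_fw3 analogous, matching .20 and .30)
[route_fw1]
SOURCE_KEY = MetaData:Host
REGEX = 192\.168\.10\.10
DEST_KEY = _MetaData:Index
FORMAT = fw1

With connection_host = ip, the host field carries the sending firewall's IP, so the transforms can key off it.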
Our cloud instance was upgraded 3 days ago; since then the log volume has increased by about 20%, though we're logging the same amount of data as for the past 3+ years. Has anyone had to deal with a similar issue?

I looked at the indexes - they all seem to have increased proportionally, which leads me to believe it's a change in the current version... one that made it "less efficient" in this respect...

Thanks!
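A sketch for quantifying where the growth sits, using the license usage log (assuming your Cloud role can search _internal; split by st or h instead of idx if sourcetype or host is more telling):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes by idx

Comparing the daily totals before and after the upgrade date should show whether the extra 20% is truly uniform or concentrated in a few indexes or sourcetypes.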
Hi, I created a new AppDynamics application for my system's UAT environment. These are Windows Server 2019 IIS web and application servers with the .NET agent. After installing the AppD .NET agent and having it running, it does report back to the controller, but we observed new errors in our application, and the business process we are trying to run is failing with this:

Full Error Message: System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

If I disable AppD, then the process works and the TLS error goes away. So AppD is actually causing an impact on our environment. Any ideas on how I can fix it?
The attached screenshot is a list of 15 query ids with started, ended, bstarted (15-minute bucket) and query duration. I tried the concurrency command:

| where started <= "2021-09-20 15:45:00" AND ended >= "2021-09-20 15:45:00"
| eval bstarted = strptime(started, "%Y-%m-%d %H:%M:%S.%3N")
| bin span=15m bstarted
| eval bstarted = bstarted+900
| where strptime(started, "%Y-%m-%d %H:%M:%S.%3N") <= bstarted and strptime(ended, "%Y-%m-%d %H:%M:%S.%3N") >= bstarted
| eval bstarted=strftime(bstarted,"%F %T")
| eval st=strptime(started, "%Y-%m-%d %H:%M:%S.%3N")
| concurrency duration=query_duration start=st
| table query_id started bstarted ended query_duration concurrency

I was surprised to see that the concurrency values are a sequence from 1 to 15, which does not quite make sense. I also tried stats dc(query_id) as concurrency_ct by bstarted and did not get the desired results per the screenshot. Sample rows:

"20210920_193943_02412_rhmkv","2021-09-20 15:39:43.236","2021-09-20 15:45:00","2021-09-20 15:57:57.829",1094,
"20210920_192921_02318_rhmkv","2021-09-20 15:29:21.670","2021-09-20 15:30:00","2021-09-20 16:03:59.951",2078,
"20210920_193005_02322_rhmkv","2021-09-20 15:30:06.022","2021-09-20 15:45:00","2021-09-20 15:46:33.583",987,
"20210920_192946_02321_rhmkv","2021-09-20 15:29:46.959","2021-09-20 15:30:00","2021-09-20 16:13:08.574",2601,

There are 4 query_duration values > 900 seconds; the desired concurrency_ct values are:

"2021-09-20 15:30:00",2
"2021-09-20 15:45:00",15 (2 more from the 2078 and 2601 second duration values)
"2021-09-20 16:00:00",4 (4 more from the 2078, 2601, 1094 and 987 second duration values)

I was hoping the concurrency command would give me these results. Not sure where the mistake is? Would appreciate some input and guidance. Thanks.
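A sketch that counts each query in every 15-minute bucket it overlaps, rather than only where it started (concurrency, by contrast, counts how many durations overlap each event's start time, which is likely why the values climbed 1 to 15). Field names follow the sample rows above, and the bucket label keeps your bucket-end (bstarted+900) convention:

| eval st=strptime(started,"%Y-%m-%d %H:%M:%S.%3N"), en=strptime(ended,"%Y-%m-%d %H:%M:%S.%3N")
| eval bucket=mvrange(floor(st/900)*900, en, 900)
| mvexpand bucket
| eval bstarted=strftime(bucket+900,"%F %T")
| stats dc(query_id) as concurrency_ct by bstarted

mvrange generates one bucket-start value per 15-minute interval the query's [st,en] span touches, and mvexpand turns those into one row per query per bucket before the distinct count.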
Hi all, I'm currently trying to use Splunk to create an alert for the following scenario: I have a search that tells me the number of rows and partitions a data pipeline ingested, so basically I already extract the following fields:

- Table Name
- Number of partitions
- Number of rows

I also have a dashboard that shows me a timechart of the number of partitions and rows across different executions over time. What I need in this example is an alert that gets triggered when the number of partitions or rows differs by more than a specified % between executions. So in this example, executions 1 and 2 have a low difference between them, but execution 3 is clearly an outlier that should be alerted.

Execution 1: table_name = table_1, num_part = 12, num_rows = 1400
Execution 2: table_name = table_1, num_part = 10, num_rows = 1000
Execution 3: table_name = table_1, num_part = 10000, num_rows = 100000000

Any suggestions on how I can do it?
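A sketch using streamstats to compare each execution with the previous one per table; the index/sourcetype names and the 50% threshold are placeholders to adapt:

index=pipeline_logs sourcetype=pipeline_stats
| sort 0 table_name _time
| streamstats current=f window=1 last(num_part) as prev_part last(num_rows) as prev_rows by table_name
| eval pct_part=abs(num_part-prev_part)/prev_part*100, pct_rows=abs(num_rows-prev_rows)/prev_rows*100
| where pct_part > 50 OR pct_rows > 50

With the example data, execution 3 produces pct_part of 99900% and trips the threshold, while executions 1 and 2 stay under it. Saved as an alert with a result-count trigger, this fires only on the outliers.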
Hello Community, I have an issue with a dashboard that has auto-refresh on, set to 2 minutes: the dashboard keeps showing data while the refresh (which takes around 15 seconds) is running. However, the table has a drilldown that relies on data within the table. During the refresh that data is no longer in the relevant row tokens, which makes the drilldown go to "$row.tokenname$" instead of the old, still-displayed value. Is there a way to bypass that and retain the old data? If not, is there a way to immediately clear the list once the refresh starts? Thanks!
How do I correct my email address in my new profile?
Hi All, We are embarking on moving our Splunk 8.1.3 servers from an old version of RHEL to new RHEL servers. The server names are going to be different, and we currently have the config based on IP/hostname as opposed to an alias. We have perused the Answers community as well as the Splunk documentation and have identified the approach below for indexer cluster migration. There's no single Splunk document that covers this fairly common occurrence in many environments. We have also raised a Splunk support case for advice, but to no avail.

A few details about our existing cluster so you can factor them in while reviewing:

- Multisite indexer cluster with 2 indexers per site across 2 sites
- Current RF 2, SF 2 (origin 1 and total 2 for both)
- Daily ingestion of roughly 400-450 GB
- Current storage of 4 indexers (roughly 9 TB utilised out of 11 TB total hot_warm on each indexer); roughly 36 TB across the indexer cluster
- ~20K buckets, with approx. 5K buckets in each indexer
- Retention varies between 3 months, 6 months and some at 1 year
- It is imperative that we have all data replicated to the new servers before we retire the old ones
- No forwarder-to-indexer discovery
- Un-clustered search heads (4), including one Splunk ES 6.4.x

Key concerns:

- Storage on existing indexers being overwhelmed
- Degraded search performance on the cluster
- Resource utilisation increase on cluster instances
- Sustained replication
- Lack of ability to test the below at scale

Can you please verify/validate/provide commentary/gotchas on this approach? Would love suggestions on improving it as well. The concrete commands and settings behind the steps are sketched after the list.

Proposed steps

Step 0 - Server build
- Build new CMs and indexers to existing specs (2 RF, 2 SF, copy remote bundles on CM, etc.)

Step 1 - Cluster master migration
- Search heads: add the additional cluster master via DS push
- Place the existing CM in maintenance mode/stop the CM
- Ensure the new CM is started and running; place it in maintenance mode
- Modify server.conf on 1 indexer at a time to point to the new CM
- Disable maintenance mode and allow the new cluster master to replicate/bucket fixup
- Validate and confirm that no errors and no problems are seen

Step 2 - Indexer cluster migration, 1 on each site
- Push changes to forwarders to include the 2 new indexers (1 in each site)
- Place the new CM in maintenance mode; add 1 indexer in each site with the existing SF and RF
- Remove maintenance mode
- Issue the data rebalance command (searchable rebalance, the end goal of which is to have roughly 3K-3.2K buckets in each of the 6 indexers)
- Wait for catch-up
- Place the new CM in maintenance mode; modify the CM RF from 2 to 4, keeping SF the same
- In the same maintenance mode, modify origin to 3 and total to 4 for the replication factor; SF remains the same (on all 6 indexers, 4 old + 2 new). This is to ensure each indexer in a site has 1 copy of each bucket
- Restart the indexer cluster master (and possibly indexers) for the new changes to take effect
- Disable maintenance mode
- Wait for catch-up

Step 3 - Indexer cluster migration, 1 on each site
- Once caught up and validated, place the cluster master in maintenance mode
- Take one old indexer offline with enforce-counts and add 1 new indexer on each site
- Disable maintenance mode
- Immediately issue the data rebalance command
- Wait for catch-up

Step 4 - Cluster migration completion
- Once validated, place the cluster master in maintenance mode
- Take the 2nd set of old indexers on each site offline with enforce-counts
- Modify RF to 2; SF remains the same
- Modify origin to 1 and total to 2 for RF on all indexers
- Restart indexers and CM as appropriate
- Disable maintenance mode
- Allow excess-bucket fixup activities to complete
- Once the cluster has regained status quo, shut down the old indexers
- Remove references to the old indexers from forwarders and search heads to avoid warnings/errors
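For reference, a sketch of the settings and commands behind several of these steps (the new CM hostname is hypothetical; on 8.1.x the peer side still uses the mode = slave / master_uri names):

# server.conf on each indexer peer, edited one at a time (Step 1)
[clustering]
mode = slave
master_uri = https://new-cm.example.com:8089
pass4SymmKey = <existing cluster key>

# On the cluster master (Steps 2-4)
splunk enable maintenance-mode
splunk rebalance cluster-data -action start -searchable true
splunk disable maintenance-mode

# On an old peer being retired (Steps 3-4)
splunk offline --enforce-counts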
Is it possible to disable some stanzas from, for example, the Windows TA, using another TA? I apologize ahead of time if this comes off newb-ish and needlessly complex, but we've run into an interesting problem. We have a number of Citrix XenApp servers in our environment, and a little while back they began struggling with some of the processes that Splunk spawns through scripted inputs. They were taking up a lot of CPU and RAM. The short-term fix seemed to be disabling these inputs, like [admon] and [WinRegMon], and setting them to interval = -1. I found these stanzas in both the \etc\system\ inputs.conf and the Windows TA inputs.conf.

To keep them disabled, and only put this in place on Citrix servers, I created a Citrix server class. Then I took the Windows TA, copied it, renamed it to something else, and kept everything the same except that I disabled those stanzas for this "Citrix" Windows TA (they are enabled in the standard Windows TA). This accomplished the goal of keeping them disabled for Citrix, but I'm not a fan of this solution because there are probably references within the Windows TA that need it to have the standard folder path ("Splunk_TA_windows"). I've already located and corrected a couple.

So I'm interested in moving the Citrix servers back to the standard Splunk_TA_windows TA, but want to keep those stanzas disabled for just those servers. My question: Can I create a custom TA, apply it just to that Citrix server class, and just have an inputs.conf file with those stanzas disabled? I'm just not sure which TA would "win" if they had conflicting instructions for the stanzas: Splunk_TA_windows saying that [WinRegMon] is not disabled, for example, but the custom TA saying that it should be disabled. So the custom TA inputs.conf file would have stanzas like this:

[WinRegMon]
disabled = 1
interval = -1
baseline = 0

[admon]
interval = -1
disabled = 1
baseline = 0
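For reference, in the global context (which covers inputs.conf on forwarders) conflicting settings across apps are resolved by ASCII sort order of the app directory name, with the alphabetically earlier name generally winning, so a sketch of such an override TA might deliberately pick a name that sorts before Splunk_TA_windows (the app name below is made up; worth verifying against the "Configuration file precedence" docs for your version):

# etc/deployment-apps/A00_TA_windows_citrix_overrides/local/inputs.conf
# disabled = 1 alone stops the input; interval/baseline overrides optional
[WinRegMon]
disabled = 1

[admon]
disabled = 1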
Hello, I'm having some issues with the add-on I created. I developed the add-on with a simple Python script to query an API; if the result is OK I get a JSON object and write it to Splunk, and if it fails I write a JSON object with the failed response. I've tested the script locally on my computer and it works; I've tested it in the Add-on Builder and it works. Even when there is an error it returns the failed response.

I have set up several inputs, each with its own API URL, but somehow some hosts keep failing even when the host is up. It looks to me like the script is stuck, even though it has an exception-handling function that catches errors when a call fails. I was testing the up/down process and noticed that when the host is down, the script never detects when the host comes back online. Does anyone know why this is happening?
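One common culprit with these symptoms is an HTTP call with no timeout: a half-open connection then blocks the input indefinitely, and no exception is ever raised for the handler to catch. A sketch of the pattern, assuming the requests library (api_url is a placeholder):

import requests

def poll_api(api_url):
    """Query the API, returning either the parsed JSON or a failure record."""
    try:
        # Without a timeout, a dead host can hang this call forever;
        # (connect, read) timeouts guarantee the input keeps cycling.
        resp = requests.get(api_url, timeout=(5, 30))
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.RequestException as e:
        return {"status": "failed", "url": api_url, "error": str(e)}

With timeouts in place, a down host produces a failure event each polling interval and the next interval retries cleanly, so recovery is detected as soon as the host is back.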
Hi Splunkers, I'm using an MSA (Managed Service Account) to run my Splunk instance. So let's say I changed my MSA password in AD - do I need to change the password on the Splunk server as well?
Hello! I have a lookup table with fields 'name' and 'last_login'. I'm trying to find users who haven't logged in within the past 30 days. Originally, I had this:

| inputlookup Users.csv
| where strptime('last_login',"%m/%d/%Y %H:%M:%S") < relative_time(now(),"-30d@d") OR isnull(last_login)
| sort last_login desc

However, it is outputting any user with a login from 30+ days ago. I would like to exclude users who are still logging in recently (within those 30 days). Thank you! Any help would be greatly appreciated!
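If the lookup can hold more than one row per user, the filter needs to apply to each user's most recent login rather than to every row. A sketch under that assumption:

| inputlookup Users.csv
| eval login_epoch=strptime(last_login,"%m/%d/%Y %H:%M:%S")
| stats max(login_epoch) as latest_login by name
| where latest_login < relative_time(now(),"-30d@d") OR isnull(latest_login)
| eval latest_login=strftime(latest_login,"%m/%d/%Y %H:%M:%S")

The stats max keeps only each user's newest login (users with no last_login at all come through with a null latest_login and are caught by the isnull clause), so anyone with a recent login is excluded before the 30-day comparison.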
Hello All, I have a large dataset "audit.cost_records" wherein I am trying to locate a correlation based on a large number of fields. These fields are large in number (over 1000 total) and many can be grouped (for my current purposes). Some groups may consist of 10+ fields while others may be only 1. Some example field names are: ab, ab-HE, ab-SC, ab-LS, rs, rs-SH, rz, xr, xr-FL, xr-SH, xr-SS. In this example, all of the ab items should be grouped, as with rs and xr.

Unfortunately, I am new to Splunk and my understanding of the Splunk language is elementary at best. I do have somewhat advanced, or at least journeyman, knowledge of SQL and basic knowledge of a few programming languages (Java and the like). Unfortunately that doesn't seem to be helping me here. Based on several hours of searching this community and trial and error, I have arrived at the query below. I was trying to use wildcards to group by similar field names, but I've just read that Splunk may segment the field based on the '-' character, which makes my wildcard not work as I intend.

| from datamodel:"AUDIT.COST_RECORDS"
| eval Group1=if(match(fieldName,"ab*"), "ABGroup", Group1)
| eval Group1=if(match(fieldName,"rs*"), "RSGroup", Group1)
| timechart span=30d sum(cost) as Cost by Group1

Does anyone have any recommendations on how to solve this search? My overall intent is to have a year-to-date line chart (spanned monthly) showing cost over time for each "Group".
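A sketch of the grouping with match() treated as the regex function it actually is ("ab*" means the letter a followed by zero or more b's, not a wildcard; an anchored pattern like "^ab(-|$)" matches ab and ab-XX but not abz). This assumes, as the draft does, that a fieldName column holds the names; case() collapses the evals into one pass:

| from datamodel:"AUDIT.COST_RECORDS"
| eval Group1=case(match(fieldName,"^ab(-|$)"), "ABGroup",
                   match(fieldName,"^rs(-|$)"), "RSGroup",
                   match(fieldName,"^xr(-|$)"), "XRGroup",
                   true(), "Other")
| timechart span=30d sum(cost) as Cost by Group1

If the groups are instead literal field names on each record (actual columns named ab, ab-HE, ...), a foreach over wildcarded field lists would be the analogous approach, summing '<<FIELD>>' into per-group totals before the timechart.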
Hello, we configured custom business transactions in one of our customer's environments, but we can't see our custom business transactions in the BT menu; all our custom BTs are being displayed under the "All Other Traffic" group. Could you please help us with this?

Note: the same configuration/BTs are working in other environments/applications.
Hi Team, after upgrading to Splunk version 8.2.2.1, we have started seeing a warning in one specific dashboard: "cannot expand lookup field 'triggertype' due to a reference cycle in the lookup configuration. Check search.log for details and update configuration to remove the reference cycle". I have seen other Splunk community answers for the field userid, but I would like to know the exact resolution for this triggertype field. Regards,
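For context, this warning typically appears when an automatic lookup's output field feeds back into its own input, directly or via a chain of lookups. A sketch with hypothetical stanza and lookup names; renaming the output (or switching to OUTPUTNEW) breaks the cycle:

# props.conf - a cyclic definition: triggertype is both input and output
[my_sourcetype]
LOOKUP-trigger = trigger_lookup triggertype OUTPUT triggertype

# Cycle removed: write the looked-up value to a differently named field
[my_sourcetype]
LOOKUP-trigger = trigger_lookup triggertype OUTPUT triggertype AS trigger_desc

Checking search.log, as the warning suggests, should show which lookup definitions participate in the cycle for your triggertype field.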
hi all... I'm an experienced Splunk user (oldschool Splunk). All the Google results seem to point to the solution for the old version of the Splunk dashboard builder. All I want to do is add a time picker to my dashboard (like we used to be able to do). Any ideas? (Using the latest version of Splunk with the Dashboard Studio option selected; all the dashboard elements are set to 24 hrs.)
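In Dashboard Studio the picker is added as an input (+ Add Input > Time range in the editor) and then bound to searches by token. A sketch of what the resulting pieces look like in the dashboard's source JSON; the input and token names are just examples:

"inputs": {
    "input_time": {
        "type": "input.timerange",
        "title": "Time range",
        "options": {
            "token": "tr",
            "defaultValue": "-24h@h,now"
        }
    }
},
"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "earliest": "$tr.earliest$",
                    "latest": "$tr.latest$"
                }
            }
        }
    }
}

The layout section also needs the input listed for it to render, e.g. "globalInputs": ["input_time"]; the defaults block above applies the token to every search so the per-panel 24 hr settings no longer pin the range.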
Question regarding the indexes.conf on my search heads. Each index stanza contains the paths to the home/cold/thawed directories, but they also have a frozenTimePeriodInSecs value and maxDataSize. My question is: can these two values be removed from the search heads? I thought that the indexers house the indexes.conf size and retention settings, and search heads only hold the paths to retrieve the data. Please help me understand. Thanks
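For reference, a minimal search-head indexes.conf sketch with a hypothetical index name: retention and sizing keys such as frozenTimePeriodInSecs and maxDataSize only take effect on the instances that actually store the buckets, i.e. the indexers, so a search-head copy typically just declares the index so it appears in typeahead and role-based index lists:

# indexes.conf on a search head - declares the index; no data lives here
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb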