All Topics


Hi, I have production logs as .txt files containing many fields that are always in the format $_XXX: YYY, with XXX being the field name and YYY the field value. All fields belong together as one set of production data for one device. The complete file has a timestamp ($_Date: ...) somewhere in the text.

I now want the whole file parsed like a CSV file, but with only one row of data, so that the XXXs are my column names and the YYYs are the actual values, as in this screenshot from a CSV. Whatever I try, Splunk always treats my values as separate events instead of one large single event. Is there a simple way to achieve this?
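A minimal props.conf sketch for ingesting each file as a single event, assuming a hypothetical sourcetype name device_prod_log: the LINE_BREAKER regex is one that can never match, so the file is never split, and the REPORT/transform pair turns every $_XXX: YYY pair into a search-time field.

props.conf:

[device_prod_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ((?!))
TRUNCATE = 0
TIME_PREFIX = \$_Date:\s*
REPORT-prodfields = device_prod_kv

transforms.conf:

[device_prod_kv]
REGEX = \$_(\w+):\s*([^\r\n]+)
FORMAT = $1::$2

With that in place, a search like sourcetype=device_prod_log | table * should give the one-row, many-columns view the CSV screenshot shows.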
Hi, I am new to Splunk and working with parking records. Within my events, I have a permit_expiry field, which is a date and time a few hours after the initial data timestamp. How do I display the number of permits expiring within the next hour? I understand there is the now() function, which holds the current time, but I am unsure how to utilise it. My draft search is below, but I know something is missing in the "now() + 1 hour" part:

sourcetype="parking_log" | where permit_expiry < now() + 1 hour | stats count by permit_expiry

Many thanks!
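A sketch of one way to express "within the next hour", assuming permit_expiry is a string that first needs strptime() (adjust the time format to your data); relative_time(now(), "+1h") is the SPL way to add one hour to the current time:

sourcetype="parking_log"
| eval expiry_epoch = strptime(permit_expiry, "%Y-%m-%d %H:%M:%S")
| where expiry_epoch >= now() AND expiry_epoch < relative_time(now(), "+1h")
| stats count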
Hi, I hope everyone is fine. I am facing an issue: I am forwarding logs to a third party, to a port on another system. On the receiving side I am using the Python third-party library socket.io, and I see the error "code 400, message Bad request version ('nCurrent=0')" at the port. With the Python standard library socket, receiving from Splunk works fine; only with the socket.io library do I get the Bad Request error. Please help me solve this issue.
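If I read this right, the mismatch is in the protocol, not in Splunk: a raw TCP output from Splunk sends plain event bytes, while a Socket.IO server expects HTTP/WebSocket handshake requests, so the first line of raw log data gets parsed as a malformed HTTP request line, hence "Bad request version". A plain socket listener works because it accepts arbitrary bytes. A sketch of the raw-TCP forwarding side in outputs.conf (host, port, and group name are hypothetical):

[tcpout:thirdparty]
server = 192.0.2.10:7514
sendCookedData = false

To keep socket.io on the receiver, something would have to translate the events into HTTP first, for example sending to a web endpoint instead of a raw TCP port.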
Hi, I am new to Splunk and working with parking records. I am trying to display parking spaces that are currently not in use.

Within my monitored data, each record has the following fields:
- the time the record was created, which is when the car parked
- permit_expiry, which is a couple of hours after the creation time
- parking_space, which is a number between 1 and 99 that doesn't repeat until the permit_expiry has passed

I also have a separate lookup table/CSV file called parking_lots listing every parking_space (1-99) and its respective parking_lot.

This is what I have come up with so far:

sourcetype="parking_log" | where now() < expiry_time | lookup parking_lots parking_space | *display parking_space that don't appear in the above search (1-99)*

I am struggling to understand how to display the missing parking spaces, as well as the use of the now() function. Many thanks!
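A sketch of the usual pattern for "show me what's in the lookup but not in the events": start from the lookup and exclude the spaces the events say are occupied (the strptime format is an assumption, adjust to your data):

| inputlookup parking_lots
| search NOT
    [ search sourcetype="parking_log"
      | eval expiry_epoch = strptime(permit_expiry, "%Y-%m-%d %H:%M:%S")
      | where expiry_epoch > now()
      | dedup parking_space
      | fields parking_space ]
| table parking_space parking_lot

The subsearch returns the currently occupied parking_space values, and NOT [...] keeps only the lookup rows (all 99 spaces) that don't match, i.e. the free ones.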
Hello all, I am extracting a field that comes in multiple formats; however, I found that one of the formats is not working as expected. Details below. Please help me extract all of these formats without affecting the others.

Example 1 (the quoted value is the field I am trying to extract):
APPLICATION-MIB::evtDevice = STRING: "Server2.Application.APP.INTRANET" APPLICATION-MIB::evtComponent =

Example 2:
APPLICATION-MIB::evtDevice = STRING: "Server1" APPLICATION-MIB::evtComponent =

Example 3:
APPLICATION-MIB::evtDevice = STRING: "SG2-SWMGMT-CAT-001" APPLICATION-MIB::evtComponent =

Regex used:
APPLICATION-MIB::evtDevice\s+=\sSTRING:\s\"(?<source_host>\w+[a-zA-Z0-9-_]\w+)

The above regex works for both example 1 and example 2. However, for example 3 it captures only part of the quoted value rather than the whole string. Please help me get this working.
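Since the value is always wrapped in double quotes, one likely fix is to stop enumerating word characters and match everything up to the closing quote instead. A sketch:

APPLICATION-MIB::evtDevice\s+=\s+STRING:\s+"(?<source_host>[^"]+)"

[^"]+ accepts dots and any number of hyphens, so all three examples ("Server2.Application.APP.INTRANET", "Server1", "SG2-SWMGMT-CAT-001") are captured in full. The original pattern's \w+[a-zA-Z0-9-_]\w+ only allows a single non-word character in the middle, which is why it stops partway through example 3 at the second hyphen.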
I work for a bio company (Creative Biogene) and I want to know how I can avoid data loss on my work computer.
I am new to Splunk and a bit lost reading the documentation on how to create a dashboard and implement inputs that prompt the user for values to use in searches. The data I have is a series of prices that I am comparing against an average price for a certain time period (for example, 30 days). My goal is to create a dashboard that lets the user enter a threshold value to be used in my search, in an equation that pushes some values over this threshold and some below it. I already have this query written out. I am not sure how to proceed with creating the dashboard and inputs and then incorporating my already-written query. Do I need to make this a saved search? Please help, thank you!
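A minimal Simple XML sketch of the pattern: a text input sets a token, and the panel's inline search references that token with $threshold$ (no saved search needed; the sourcetype and field names here are hypothetical stand-ins for your existing query):

<form>
  <label>Price Threshold</label>
  <fieldset submitButton="true">
    <input type="text" token="threshold">
      <label>Enter threshold value</label>
      <default>100</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>sourcetype="prices" | stats avg(price) as avg_price by product | where avg_price > $threshold$</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

You can paste your existing query into the <query> element via Dashboards > Create New Dashboard and then the dashboard's Source (XML) editor.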
I am trying to produce the following output:

app_name   request_id   time   workload at the time (requests per second)
App1       123          1000   ?
App2       1234         1000   ?

I have two queries that return:

1. A table with the requests taking the most time:

app_name   request_id   time
app1       1            1000

2. A numeric value giving the requests per second for a given app:

app_name   requests per second
app1       10

How can I join the results from two different queries to produce the final table above? Thank you!
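A sketch of one way to combine them with join on app_name (the angle-bracket placeholders stand for the two searches described above; the rate calculation is one hypothetical way to derive requests per second):

<base search for slowest requests>
| table app_name request_id time
| join type=left app_name
    [ search <base search for request rate>
      | stats count as total earliest(_time) as et latest(_time) as lt by app_name
      | eval requests_per_second = round(total / (lt - et), 2)
      | fields app_name requests_per_second ]
| table app_name request_id time requests_per_second

type=left keeps rows from the first table even when the subsearch has no matching app_name.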
I have been trying to get nmap output into Splunk. I thought the XML output would be nice and straightforward! Whilst the events are separated correctly, the issue is having multiple values of the same field in an event: automatic field extractions pick up the first <port...> section, but the rest are ignored. I tried using KV_MODE=xml but that didn't make a difference. I thought Splunk was quite happy pulling multivalue fields out of XML, but maybe it's not quite the XML Splunk is expecting. I am on Splunk Cloud, so CLI changes are not an option. Any pointers appreciated! Thanks.

<host starttime="1633560153" endtime="1633560291"><status state="up" reason="timestamp-reply" reason_ttl="34"/>
<address addr="81.123.123.123" addrtype="ipv4"/>
<hostnames>
<hostname name="host.name.resolved.com" type="PTR"/>
</hostnames>
<ports><port protocol="tcp" portid="21"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="ftp" method="table" conf="3"/></port>
<port protocol="tcp" portid="22"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="ssh" method="table" conf="3"/></port>
<port protocol="tcp" portid="23"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="telnet" method="table" conf="3"/></port>
<port protocol="tcp" portid="80"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http" method="table" conf="3"/></port>
<port protocol="tcp" portid="443"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="https" method="table" conf="3"/></port>
<port protocol="tcp" portid="8000"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http-alt" method="table" conf="3"/></port>
<port protocol="tcp" portid="8080"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="http-proxy" method="table" conf="3"/></port>
<port protocol="tcp" portid="8888"><state state="filtered" reason="no-response" reason_ttl="0"/><service name="sun-answerbook" method="table" conf="3"/></port>
</ports>
<times srtt="15851" rttvar="15851" to="100000"/>
</host>
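A search-time sketch using spath, which handles the repeated <port> elements as multivalue fields ({@...} selects attributes; the sourcetype name is a placeholder):

sourcetype="nmap_xml"
| spath output=port_id    path=host.ports.port{@portid}
| spath output=port_state path=host.ports.port.state{@state}
| spath output=service    path=host.ports.port.service{@name}
| eval port=mvzip(mvzip(port_id, port_state), service)
| mvexpand port
| eval port=split(port, ",")
| eval port_id=mvindex(port,0), port_state=mvindex(port,1), service=mvindex(port,2)
| table _time port_id port_state service

mvzip/mvexpand keeps each portid aligned with its own state and service name, giving one row per port instead of one truncated extraction per event.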
I'm trying to figure out how to get the time difference between two events that use the same UUID. However, the second event doesn't always happen. How can I add to or alter my query to show the time difference between event1 and event2, and just make event2 NULL if it does not exist for that UUID? Here is what I have so far; it shows the UUID, time_start of event1, time_finish of event2, and the difference. I think I might need to use the transaction command instead of stats, but would like to avoid anything too resource-intensive.

index=* sourcetype=* eventCode="event1" serviceCallUUID=*
| stats latest(_time) as time_start by serviceCallUUID
| join serviceCallUUID
    [ search index=* sourcetype=* eventCode="event2" serviceCallUUID=*
      | stats latest(_time) as time_finish by serviceCallUUID ]
| eval difference=time_finish-time_start
| eval difference=strftime(difference,"%M:%S")
| eval time_finish=strftime(time_finish,"%m-%d-%Y %H:%M:%S.%f")
| eval time_start=strftime(time_start,"%m-%d-%Y %H:%M:%S.%f")

Thanks!
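A sketch of a join-free alternative: search both event codes at once and split them inside a single stats, so UUIDs without an event2 simply end up with a null time_finish (which join would otherwise drop, since its default type is inner):

index=* sourcetype=* (eventCode="event1" OR eventCode="event2") serviceCallUUID=*
| stats latest(eval(if(eventCode=="event1", _time, null()))) as time_start
        latest(eval(if(eventCode=="event2", _time, null()))) as time_finish
        by serviceCallUUID
| eval difference=strftime(time_finish-time_start, "%M:%S")
| eval time_start=strftime(time_start, "%m-%d-%Y %H:%M:%S.%f")
| eval time_finish=strftime(time_finish, "%m-%d-%Y %H:%M:%S.%f")

When time_finish is null, the eval results stay null, so the row is kept with empty difference and time_finish columns.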
Alerts and reports are both persisted in savedsearches.conf. How does the UI decide that a certain entry shall be displayed under the Alerts page (../app_name/alerts)?

The question "Report vs. Alert, what's the difference?" mentions this in its second paragraph: "while we use Alert for a Search that will make a determination to take action in contacting the outside world via email or script execution if its results match a criteria."

Question 1: Which settings make an entry show under Alerts (and not under Reports)?
Question 2: I want to deploy the alerts of my app with the sole action of "Add to Triggered Alerts" (for which I use the setting alert.track = 1). No email, no script. Is this possible?
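For what it's worth, my understanding of the relevant savedsearches.conf keys (the stanza name and search are hypothetical): an entry lists under Alerts when it is scheduled and carries alerting attributes such as alert_type and alert_comparator, while a scheduled search without them lists as a report. Question 2 should then just be a stanza like this, with no action.* keys at all:

[My Triggered-Only Alert]
search = index=main log_level=ERROR | stats count
cron_schedule = */15 * * * *
enableSched = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
alert.track = 1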
UDP 7511 syslog transmission was set up on three firewalls, but the same port cannot be registered multiple times in Splunk Web. I used the configuration below (in /opt/splunk/etc/apps/search/local/inputs.conf), but it failed. However, logs do arrive when each firewall is set to a different port in Splunk Web.

[udp://7511]
connection_host = ip
host = 192.168.10.10
index = fw1
source = fw1_source
sourcetype = syslog

[udp://7511]
connection_host = ip
host = 192.168.10.20
index = fw2
source = fw2_source
sourcetype = syslog

[udp://7511]
connection_host = ip
host = 192.168.10.30
index = fw3
source = fw3_source
sourcetype = syslog
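If I read the problem correctly, the root cause is that stanza names in inputs.conf must be unique: three [udp://7511] stanzas collapse into one, and only a single input can bind a UDP port anyway. A sketch of the usual workaround: one listener with connection_host = ip (so host is set from the sender automatically), plus index routing by source address at parse time (the transform names are hypothetical; index names are reused from the post):

inputs.conf:

[udp://7511]
connection_host = ip
sourcetype = syslog
index = fw1

props.conf:

[source::udp:7511]
TRANSFORMS-fwroute = route_fw2, route_fw3

transforms.conf:

[route_fw2]
SOURCE_KEY = MetaData:Host
REGEX = 192\.168\.10\.20
DEST_KEY = _MetaData:Index
FORMAT = fw2

[route_fw3]
SOURCE_KEY = MetaData:Host
REGEX = 192\.168\.10\.30
DEST_KEY = _MetaData:Index
FORMAT = fw3

Events from .10 stay in the default fw1 index; events whose host matches .20 or .30 are rerouted to fw2/fw3.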
Our cloud instance was upgraded 3 days ago; since then, the log volume has increased by about 20%, even though we're logging the same amount of data as for the past 3+ years. Has anyone had to deal with a similar issue?

I looked at the indexes: they all seem to have increased logging proportionally, which leads me to believe it's a change in the current version that made it "less efficient" in this respect...

Thanks!
Hi, I created a new application in AppD for my system's UAT environment. These are Windows Server 2019 IIS web and application servers with the .NET agent. After installing the AppD .NET agent and getting it running, it does report back to the controller, but we observed new errors in our application, and the business process we are trying to run fails with this:

Full error message: System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

If I disable AppD, the process works and the TLS error goes away. So AppD is actually having an impact on our environment. Any ideas on how I can fix it?
The attached screenshot is a list of 15 query ids with started, ended, bstarted (15-minute bucket), and query duration.

1. Tried with the concurrency command:

| where started <= "2021-09-20 15:45:00" AND ended >= "2021-09-20 15:45:00"
| eval bstarted = strptime(started, "%Y-%m-%d %H:%M:%S.%3N")
| bin span=15m bstarted
| eval bstarted = bstarted+900
| where strptime(started, "%Y-%m-%d %H:%M:%S.%3N") <= bstarted and strptime(ended, "%Y-%m-%d %H:%M:%S.%3N") >= bstarted
| eval bstarted=strftime(bstarted,"%F %T")
| eval st=strptime(started, "%Y-%m-%d %H:%M:%S.%3N")
| concurrency duration=query_duration start=st
| table query_id started bstarted ended query_duration concurrency

I was surprised to see that the concurrency values are a sequence from 1 to 15, which does not quite make sense.

2. Tried with stats dc(query_id) as concurrency_ct by bstarted and did not get the desired results per the screenshot:

"20210920_193943_02412_rhmkv","2021-09-20 15:39:43.236","2021-09-20 15:45:00","2021-09-20 15:57:57.829",1094,
"20210920_192921_02318_rhmkv","2021-09-20 15:29:21.670","2021-09-20 15:30:00","2021-09-20 16:03:59.951",2078,
"20210920_193005_02322_rhmkv","2021-09-20 15:30:06.022","2021-09-20 15:45:00","2021-09-20 15:46:33.583",987,
"20210920_192946_02321_rhmkv","2021-09-20 15:29:46.959","2021-09-20 15:30:00","2021-09-20 16:13:08.574",2601,

There are 4 queries with query_duration > 900 seconds; the desired concurrency_ct values are:

"2021-09-20 15:30:00",2
"2021-09-20 15:45:00",15 (2 more from the 2078- and 2601-second duration values)
"2021-09-20 16:00:00",4 (4 more from the 2078, 2601, 1094, and 987-second duration values)

I was hoping the concurrency command would give me these results. Not sure where the mistake is? Would appreciate some inputs and guidance. Thanks.
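A sketch of an approach that sidesteps concurrency (which, as I understand it, counts how many earlier events are still running at each event's start time, so it never credits a query to the later buckets it spans): expand every query into each 15-minute bucket between its start and end, then count distinct query ids per bucket. Field names are taken from the post, and the bucket is labelled by its end time (bucket + 900) to match the bstarted convention above:

| eval st = strptime(started, "%Y-%m-%d %H:%M:%S.%3N"),
       en = strptime(ended, "%Y-%m-%d %H:%M:%S.%3N")
| eval bucket = mvrange(floor(st/900)*900, en, 900)
| mvexpand bucket
| eval bstarted = strftime(bucket + 900, "%F %T")
| stats dc(query_id) as concurrency_ct by bstarted

mvrange(start, end, 900) emits one epoch value per 15-minute bucket the query overlaps, and mvexpand turns those into one row each, so a 2601-second query contributes to every bucket it runs through rather than just the one it started in.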
Hi all, I'm currently trying to use Splunk to create an alert for the following scenario: I have a search that tells me the number of rows and partitions a data pipeline ingested, so I already extract the following fields:
- table_name
- num_part (number of partitions)
- num_rows (number of rows)

I also have a dashboard that shows a timechart of the number of partitions and rows across executions over time. What I need is an alert that triggers when the number of partitions or rows differs by more than a specified percentage between executions. In this example, executions 1 and 2 differ little, but execution 3 is clearly an outlier that should be alerted on.

Execution 1: table_name = table_1, num_part = 12, num_rows = 1400
Execution 2: table_name = table_1, num_part = 10, num_rows = 1000
Execution 3: table_name = table_1, num_part = 10000, num_rows = 100000000

Any suggestions on how I can do this?
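A sketch of the usual streamstats pattern: compare each execution to the previous one for the same table, and keep only rows whose percentage change exceeds a threshold (the base search and the 50% threshold are placeholders):

<base search extracting table_name, num_part, num_rows>
| sort 0 + _time
| streamstats current=f window=1 last(num_part) as prev_part last(num_rows) as prev_rows by table_name
| eval part_change_pct = abs(num_part - prev_part) / prev_part * 100,
       rows_change_pct = abs(num_rows - prev_rows) / prev_rows * 100
| where part_change_pct > 50 OR rows_change_pct > 50

Saved as an alert that triggers when the number of results is greater than 0, this fires on execution 3 but not on the 1-to-2 change.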
Hello Community, I have an issue with a dashboard that has auto-refresh enabled, set to 2 minutes: the dashboard keeps showing the old data while the refresh (around 15 seconds) is running. However, the table has a drilldown that relies on data within the table. During the refresh, those data are no longer in the relevant row tokens, so the drilldown receives the literal "$row.tokenname$" instead of the old, still-displayed value. Is there a way to bypass this and retain the old data? If not, is there a way to clear the table immediately once the refresh starts? Thanks!
How do I correct my email address in my new profile?
Hi All, We are embarking on moving our Splunk 8.1.3 servers from an old version of RHEL to new RHEL servers. The server names are going to be different, and we currently have the config based on IP/hostname as opposed to an alias. We have perused the Answers community as well as the Splunk documentation and have identified the approach below for indexer cluster migration. There's no single Splunk document that covers this fairly common scenario. We have also raised a Splunk support case for advice, but to no avail.

A few details about our existing cluster, so you can factor them in while reviewing:
- Multisite indexer cluster with 2 indexers per site across 2 sites
- Current RF = 2, SF = 2 (origin 1 and total 2 for both)
- Daily ingestion of roughly 400-450 GB
- Current storage across the 4 indexers: roughly 9 TB utilised out of 11 TB total hot/warm on each indexer, roughly 36 TB across the cluster
- ~20K buckets, with approx. 5K buckets on each indexer
- Retention varies between 3 months, 6 months, and some at 1 year
- It is imperative that we have all data replicated to the new servers before we retire the old ones
- No forwarder-to-indexer discovery
- Un-clustered search heads (4), including one Splunk ES 6.4.x

Key concerns:
- Storage on existing indexers being overwhelmed
- Degraded search performance on the cluster
- Resource utilisation increase on cluster instances
- Sustained replication
- Lack of ability to test the below at scale

Can you please verify/validate/provide commentary/gotchas on this approach? Would love suggestions on improving it as well.

Proposed steps

Step 0 - Server build
- Build new CMs and indexers to existing specs (2 RF, 2 SF, copy remote bundles on CM, etc.)

Step 1 - Cluster master migration
- Search heads: add the additional cluster master via DS push
- Place the existing CM in maintenance mode/stop the CM
- Ensure the new CM is started and running; place it in maintenance mode
- Modify server.conf on 1 indexer at a time to point to the new CM (see the sketch after this post)
- Disable maintenance mode and allow the new cluster master to replicate/bucket fixup
- Validate and confirm that no errors and no problems are seen

Step 2 - Indexer cluster migration, 1 on each site
- Push changes to forwarders to include the 2 new indexers (1 in each site)
- Place the new CM in maintenance mode; add 1 indexer in each site with the existing SF and RF
- Remove maintenance mode
- Issue the data rebalance command (searchable rebalance, the end goal of which is roughly 3K-3.2K buckets in each of the 6 indexers)
- Wait for catch up...
- Place the new CM in maintenance mode; modify the CM RF from 2 to 4, keeping SF the same
- In the same maintenance mode, modify the RF to origin 3 and total 4; SF remains the same (on all 6 indexers, 4 old + 2 new). This is to ensure each indexer in a site has 1 copy of each bucket
- Restart the indexer cluster master (and possibly the indexers) for the new changes to take effect
- Disable maintenance mode
- Wait for catch up...

Step 3 - Indexer cluster migration, 1 on each site
- Once caught up and validated, place the cluster master in maintenance mode
- Take one old indexer on each site offline with enforce-counts and add 1 new indexer on each site
- Disable maintenance mode
- Immediately issue the data rebalance command
- Wait for catch up...

Step 4 - Cluster migration completion
- Once validated, place the cluster master in maintenance mode
- Take the 2nd set of old indexers on each site offline with enforce-counts
- Modify RF to 2; SF remains the same
- Modify RF to origin 1 and total 2 on all indexers
- Restart indexers and CM as appropriate
- Disable maintenance mode
- Allow excess-bucket fixup activities to complete
- Once the cluster has regained status quo, shut down the old indexers
- Remove references to the old indexers from forwarders and search heads to avoid warnings/errors
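For the re-pointing in step 1, a minimal sketch of the server.conf change on each indexer (the hostname is hypothetical; 8.1.x still uses the mode = slave / master_uri terminology):

[clustering]
mode = slave
master_uri = https://new-cm.example.com:8089
pass4SymmKey = <existing cluster key>

A rolling change like this, one indexer at a time with the new CM in maintenance mode, matches the plan above; the pass4SymmKey must stay identical to the old cluster's so the peers can join the new master.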
Is it possible to disable some stanzas from, for example, the Windows TA, using another TA? I apologize ahead of time if this comes off newb-ish and needlessly complex, but we've run into an interesting problem.

We have a number of Citrix XenApp servers in our environment, and a little while back they began struggling with some of the processes that Splunk spawns through scripted inputs; they were taking up a lot of CPU and RAM. The short-term fix seemed to be disabling these inputs, like [admon] and [WinRegMon], and setting them to interval = -1. I found these stanzas in both the \etc\system\ inputs.conf and the Windows TA inputs.conf.

To keep them disabled, and only put this in place on Citrix servers, I created a Citrix server class. Then I took the Windows TA, copied it, renamed it to something else, and kept everything the same except that I disabled those stanzas in this "Citrix" Windows TA (they are enabled in the standard Windows TA). This accomplished the goal of keeping them disabled for Citrix, but I'm not a fan of this solution because there are probably references within the Windows TA that need it to have the standard folder path ("Splunk_TA_windows"); I've already located and corrected a couple.

So I'm interested in moving the Citrix servers back to the standard Splunk_TA_windows TA, but want to keep those stanzas disabled for just those servers. My question: can I create a custom TA, apply it just to that Citrix server class, and just have an inputs.conf file with those stanzas disabled? I'm not sure which TA would "win" if they had conflicting instructions for the stanzas: Splunk_TA_windows saying that [WinRegMon] is not disabled, for example, but the custom TA saying that it should be disabled. The custom TA inputs.conf file would have stanzas like this:

[WinRegMon]
disabled=1
interval=-1
baseline=0

[admon]
interval=-1
disabled=1
baseline=0
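For what it's worth, my understanding of the precedence rules (worth verifying against the "Configuration file precedence" docs): inputs.conf is evaluated in the global context, where every app's local directory beats every app's default directory, and ties between app local directories are broken by ASCII order of app name, with the lexicographically first name winning. So a small override app like the sketch below (the app name is hypothetical) should win against Splunk_TA_windows both ways:

# etc/deployment-apps/A_citrix_windows_overrides/local/inputs.conf
[WinRegMon]
disabled = 1
interval = -1

[admon]
disabled = 1
interval = -1

Deployed only to the Citrix server class alongside the standard Splunk_TA_windows, this keeps the stanzas disabled there while every other server class runs the TA untouched.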