All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, we came across a strange issue: CSV logs are not being ingested when a file has only one line (in addition to the header). The same logs with two or more lines are ingested successfully. Here are the inputs.conf and props.conf we are using.

inputs.conf

  [monitor:///apps/ab_cd/resources/abcd/reports_rr/reports/abc/.../*_splunk.csv]
  sourcetype = source_type_name
  index = index_name
  ignoreOlderThan = 2h
  crcSalt = <SOURCE>

props.conf

  [source_type_name]
  KV_MODE = none
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = false
  PREAMBLE_REGEX = ^Region
  TIME_PREFIX = ^(?:[^,\n]*,){1}
  TIME_FORMAT = %Y-%m-%d
  MAX_TIMESTAMP_LOOKAHEAD = 10
  MAX_DAYS_HENCE = 5

Appreciate any ideas.
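A hedged first step rather than a fix: ask the forwarder itself why it skipped the file. The internal logs usually say (this is a standard diagnostic search; the filename filter comes from the post):

  index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "*_splunk.csv"

One possible explanation, offered only as an assumption to verify against those logs: the file-tracking CRC is computed over the first 256 bytes of a file, so a header plus a single short line can leave the file below that threshold and make it look to the forwarder like an already-seen or incomplete file.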
Hello, I'm starting out on my Splunk journey and have been tasked with figuring out a dashboard for my executives. I created a layout for a dashboard and had the idea of creating a chart, but have been struggling with the logic. What I'm looking to do is chart the count over time against the average count, so I end up with each day's throughput as a percentage of its average. I had a few ideas for the search but none seemed to work. Could someone give me some direction on what I've gotten so far? (It's definitely wrong.)

  index=* | where index="Index 1" OR index="Index 2" OR index="Index 3" | eval Count=sum(count(index)) / "something something something to get the average" | timechart count by Count
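A minimal sketch of one way to chart throughput as a percentage of its average, assuming daily buckets are wanted and that index1/index2/index3 stand in for the real index names:

  index IN (index1, index2, index3)
  | timechart span=1d count AS daily_count
  | eventstats avg(daily_count) AS avg_count
  | eval pct_of_avg = round(daily_count / avg_count * 100, 1)
  | table _time daily_count pct_of_avg

The eventstats line computes the overall average without collapsing the rows, so each time bucket can be compared against it.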
I have records that come with multiple items in a single row. Is there a way I can break each one down into multiple rows? The rest of the values will be the same and can be copied. In the screenshot below, can we break down the first row into two rows, the second into five rows, etc.? Thanks in advance to the Splunk Community. They are super helpful.
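A minimal sketch, assuming the multi-item column is a field called items (a placeholder name, since the actual fields are only in the screenshot). If the items arrive as one delimited string, split them into a multivalue field first, then expand one row per value; mvexpand copies every other field onto each new row:

  ... | makemv delim="," items
  | mvexpand items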
Hi Splunkers, I have the following situation and am interested in another opinion. We have a distributed environment with clustered indexers and SHs, and HFs in distributed sites. We are using a deployer to push out confs to the HFs and other assets defined by serverclass. I am trying to set up a configuration where the HFs receive data from a remote host inbound on a specific TCP port.

HF deployment app, local\inputs.conf. In inputs.conf there is a stanza for the expected data being input:

  # Remote Host 1
  [tcp:12345]
  index = indexA
  sourcetype = sourceType1
  disabled = 0

Now there is a TA for this data type, but it has an inputs.conf defined as:

  [tcp://22245]
  connection_host = dns
  index = indexSomethingElse
  sourcetype = sourceType
  disabled = 0

Which one takes precedence? And if the indexes are different, will this mess up the ingestion and indexing? Am I right in assuming that the inputs.conf defined for the overall inputs takes precedence?

REF: https://docs.splunk.com/Documentation/Splunk/9.1.3/Admin/Wheretofindtheconfigurationfiles
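A hedged aside: since the two stanzas name different ports (12345 vs. 22245), they are separate inputs and both can be active; precedence only matters when two apps define the same stanza, in which case the app-directory ordering rules from the referenced doc apply. Either way, btool (a real Splunk CLI) shows exactly which settings win on the HF itself:

  $SPLUNK_HOME/bin/splunk btool inputs list --debug

The --debug flag prints the file each effective setting came from.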
Hello, I am using the addcoltotals command to get the total value of a column, and I would like to display the value returned by addcoltotals in the subject of the email when an alert is triggered.

  my_search | chart count AS XXXX by YYYY | addcoltotals labelfield="Total Delivered"

The output is:

  Files | Files_Count | Total Delivered
  F1    | 3           |
  F2    | 5           |
  F3    | 3           |
        | 11          | Total

I would like 11 to be displayed in the subject line. I tried various tokens but could not get it working.

Regards
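A sketch of one approach, with the caveat that the $result.<fieldname>$ token available to alert actions reads from the first result row, while addcoltotals appends its total as the last row. One workaround is to put the total on every row with eventstats (field names follow the example output above):

  my_search
  | chart count AS Files_Count by Files
  | eventstats sum(Files_Count) AS Total_Delivered

Then a subject line such as "Delivered: $result.Total_Delivered$" picks the value up from the first row.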
I have AWS CloudTrail data and want to find out how long an EC2 instance was stopped. Is it possible to take the difference of EpochOT between row 3 and row 2, row 5 and row 4, etc.?
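A minimal sketch using the delta command (a real SPL command), assuming EpochOT holds epoch seconds, the results are sorted oldest first, and stop/start events alternate as in the screenshot; StopInstances/StartInstances are standard CloudTrail event names, but the index name is a placeholder:

  index=aws_cloudtrail (eventName=StopInstances OR eventName=StartInstances)
  | sort 0 EpochOT
  | delta EpochOT AS secs_stopped
  | where eventName="StartInstances"

delta subtracts the previous row's EpochOT from the current one, so each StartInstances row carries how long the instance sat stopped.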
Hello, I want to install the universal forwarder on Windows 11, proceeding according to the instructions. The steps I have done so far:

1. Installed the universal forwarder on Windows (splunkforwarder-9.1.3-d95b3299fa65-x64-release.msi)
2. Downloaded the license file from the cloud portal (splunkclouduf.spl)
3. Downloaded the Windows TA file on Windows (splunk-add-on-for-microsoft-windows_880.tgz)

Now I don't understand how to proceed from here. Please help.
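A hedged sketch of the usual next steps (splunk install app and splunk restart are real CLI commands; the paths assume a default Windows install and are otherwise placeholders):

  "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" install app C:\path\to\splunkclouduf.spl
  "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart

The credentials app tells the forwarder where to send data. The Windows TA is typically extracted into etc\apps (or pushed from a deployment server), with the desired inputs enabled in a local\inputs.conf, before any data shows up.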
Hi, I have an output like this:

  Location | EventName             | ErrorCode        | Summary
  server1  | Mssql.LogBackupFailed | BackupAgentError | Failed backup....
  server2  | Mssql.LogBackupFailed | BackupAgentError | Failed backup....

Now I am trying to combine all the values of Location, EventName, ErrorCode, and Summary into one field called "newfield", separated by a comma "," or ";". I am trying this command:

  | eval newfield = mvappend(LocationName, EventName, ErrorCode, summary)

but the output it gives is:

  server1 Mssql.LogBackupFailed BackupAgentError Failed backup....

The output I am expecting is:

  server1,Mssql.LogBackupFailed,BackupAgentError,Failed backup
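A minimal sketch of two ways to get a comma-joined string (field names follow the table above; note the original eval mixes LocationName/Location and summary/Summary, so match the actual field names):

  | eval newfield = Location . "," . EventName . "," . ErrorCode . "," . Summary

or, keeping mvappend and then flattening the multivalue result:

  | eval newfield = mvjoin(mvappend(Location, EventName, ErrorCode, Summary), ",")

mvappend builds a multivalue field, which renders as separate lines; mvjoin is what collapses it into a single delimited string.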
We have application data coming from Apache Tomcat servers and have a regex in place to extract the exception name. But some Tomcats send data in slightly different formats, and the extraction doesn't work for them. I have updated regexes ready for these formats, but want to keep the field name the same, i.e. exception. How do I manage multiple extractions against the same sourcetype while keeping the field name the same? If I add these regexes in transforms, would they end up conflicting with each other? Or should I extract them into different fields, such as exception1 and exception2, and then use coalesce to eventually merge them into a single field?
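A hedged sketch of the common pattern (stanza names and regexes are placeholders): define one transform per format, let each write to the same field, and chain them in props.conf. A regex that doesn't match a given event simply extracts nothing, so the transforms coexist rather than conflict:

transforms.conf

  [exception_format1]
  REGEX = Exception:\s+(?<exception>\S+)

  [exception_format2]
  REGEX = exception=(?<exception>\S+)

props.conf

  [tomcat_sourcetype]
  REPORT-exception = exception_format1, exception_format2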
Hello everyone! One of our sourcetypes contains messages with no timestamp:

  <172>hostname: -Traceback: 0x138fc51 0x13928fa 0x1399b28 0x1327c33 0x3ba6c07dff 0x7fba45b0339d

To resolve this problem, I created a transform rule that successfully eliminates this "junk" from the index:

  [wlc_syslog_rt0]
  REGEX = ^<\d+>.*?:\s-Traceback:\s+
  DEST_KEY = queue
  FORMAT = nullQueue

But after that, I still see messages indicating that timestamp extraction failed:

  01-31-2024 15:08:17.539 +0300 WARN DateParserVerbose [17276 merging_0] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (20) characters of event. Defaulting to timestamp of previous event (Wed Jan 31 15:08:05 2024). Context: source=udp:1100|host=172.22.0.11|wlc_syslog|\r\n 566 similar messages suppressed. First occurred at: Wed Jan 31 15:03:13 2024

All events from this sourcetype look like this:

  <172>hostname: *spamApTask0: Jan 31 12:58:47.692: %LWAPP-4-SIG_INFO1: [PA]spam_lrad.c:56582 Signature information; AP 00:57:d2:86:c0:30, alarm ON, standard sig Auth flood, track per-Macprecedence 5, hits 300, slot 0, channel 1, most offending MAC 54:14:f3:c8:a1:b3

Before asking, I tried to find events without a timestamp by using the regex and cluster commands but didn't find anything. So, is this normal behavior (does Splunk flag the missing timestamp before the event moves to the nullQueue?), or did I do something wrong?
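A hedged note on ordering, based on how the ingestion pipeline is documented: timestamp extraction happens in the merging pipeline (which is where DateParserVerbose logs from, as the [... merging_0] thread name in the warning suggests), while queue-routing transforms run later in the typing pipeline, so warnings for traceback events that are subsequently dropped would be expected. If the goal is also to stop Splunk hunting for a timestamp in the wrong place, a props.conf sketch (the regex is an assumption matched against the sample event, not tested):

  [wlc_syslog]
  TIME_PREFIX = \*\w+:\s
  MAX_TIMESTAMP_LOOKAHEAD = 25

This anchors parsing to the text right after the *spamApTask0: prefix, where the Jan 31 12:58:47.692 timestamp lives.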
Hello Splunk community, I would like to know if there is a way to change the database location for monitored files in the Splunk universal forwarder, similar to what Fluent Bit allows with the DB property (https://docs.fluentbit.io/manual/pipeline/inputs/tail). My Splunk universal forwarder is running in a container and accesses a shared mount containing my applications' log files, and in case the Splunk UF container restarts I would like to prevent the monitored files from being reindexed from the beginning. Is there a config to choose the database location? Cheers in advance
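A hedged sketch: there is no per-input DB path like Fluent Bit's, but the forwarder keeps its read-position database (the "fishbucket") under its data directory, and splunk-launch.conf has a real SPLUNK_DB setting that relocates that directory. Pointing it at a persistent volume should survive container restarts (the mount path below is a placeholder):

etc/splunk-launch.conf

  SPLUNK_DB=/persistent-volume/splunkforwarder-db

An alternative with the same effect is simply mounting $SPLUNK_HOME/var as a volume so the default fishbucket location itself persists.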
Hello, I have a question. We have lots of indexes, and rather than specify each one, I use index=*proxy* to search across index=some_proxy1 and index=some_proxy2. I understand that index=* is obviously a bad thing to do, but does index=*proxy* really cause bad things to happen in Splunk? I've been using syntax like this for several years, and nothing bad has ever happened. I did a test on one index.

With index=*proxy*: "This search has completed and has returned 1,000 results by scanning 117,738 events in 7.115 seconds."

With index=some_proxy1: "This search has completed and has returned 1,000 results by scanning 121,162 events in 7.318 seconds."

As you can see, in this example using *proxy* over the same time period was actually quicker.
Hi, I have this query that calculates how long alerts stay open, so far so good. But unfortunately, if a rule name repeats (duplicate rule name) in a new event, the now() function no longer calculates the correct time for the first rule that triggered. How can I calculate the SLA time without deleting duplicates, while keeping the same structure as shown in the picture?
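A hedged sketch, since the actual query is only visible in the screenshot: the usual fix is to compute the open time per alert instance instead of per rule name, by grouping on a field that is unique per trigger (alert_id below is an assumed field name):

  ... | stats earliest(_time) AS opened by rule_name, alert_id
  | eval open_secs = now() - opened

Grouping by the extra id keeps repeated rule names as separate rows, so each occurrence gets its own elapsed time.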
I am getting different results for the same query when checked in the Statistics and Visualization tabs. Attaching both screenshots.
I made a graph that sends time data at the click point. I use fieldformat to change how the time is shown. This is the time-related part of the code for that graph:

  | rename _time AS Date
  | fieldformat Date = strftime(Date,"%Y-%m-%d")

So the token data is sent like this: "2024-01-23".

I want to set the time of another graph using the data received from the token. For example, if time_token sends me "2024-01-23", I want to show only the data from 2024-01-23 in the other graph. I tried this code, but it did not work (maybe because of the format change):

  | where _time = time_token

How could I solve this problem?
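A minimal sketch, assuming the token arrives as a "%Y-%m-%d" string named time_token: _time is epoch seconds, so convert the token with strptime and compare a one-day range rather than testing equality against a formatted string:

  | where _time >= strptime("$time_token$", "%Y-%m-%d")
      AND _time < strptime("$time_token$", "%Y-%m-%d") + 86400

The + 86400 bounds the match to the 24 hours of the selected day.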
Let's say I have a query which gives no results at the present date but may give some in the future. In this query I have calculated timeval = strftime(_time,"%y-%m-%d"); since no data is coming in, _time is empty and timeval does not produce any result. But I still have to show timeval based on the present time. How can I do that? I also tried adding this at the end of the query:

  | appendpipe [stats count | where count==0 | eval timeval=strftime(now(),"%d/%m/%Y") | where count==0]

but still no result.
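A minimal sketch of the usual pattern (the appendpipe subsearch runs even when the outer search returns nothing, because stats count still emits a count=0 row). The field assignments mirror the post; the stats line before appendpipe is an assumption about the query's shape:

  <base search>
  | eval timeval = strftime(_time, "%y-%m-%d")
  | stats count by timeval
  | appendpipe
      [ stats count
        | where count == 0
        | eval timeval = strftime(now(), "%y-%m-%d") ]

If this still shows nothing, it is worth checking whether some later command in the real query filters the placeholder row back out.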
I installed the universal forwarder credentials package and the UF agent on a Windows machine, but I am still not receiving data. I have restarted the Splunk forwarder. Both packages were installed with the same user, i.e. root. I am unable to receive any type of data from the Windows OS. I need assistance.
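A hedged first check (both are real Splunk CLI commands, run from the forwarder's bin directory): confirm the UF actually has an active connection to an indexer, and see which outputs configuration it is using.

  splunk list forward-server
  splunk btool outputs list --debug

If the destination appears under "Configured but inactive forwards", the problem is usually network/port (9997 by default) or the credentials app; if it is active, the next place to look is whether any inputs are enabled at all.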
Hi Splunkers, today I have a "curiosity" about an architectural design I examined last week. The idea is the following: different regions (the 5 continents, in a nutshell), each one with its own set of log sources and Splunk components. All Splunk "items" are on prem: forwarders, indexers, SHs and so on. Moreover, every region has 2 SHs: one with Enterprise Security and another one without it. Until now, "nothing new under the sun", as we say in Italy. The new element (I mean, new for me and my experience) is the following: there is a "centralized" cluster of SHs, each one with Enterprise Security installed, that should collect the notable events from every regional ES. So the flow between those components should be:

Europe ES notables -> "centralized" ES cluster
America ES notables -> "centralized" ES cluster

And so on. So, my question is: is there any doc about forwarding notable events from one ES platform to another? I searched but didn't find anything about it (probably I searched badly, I know).
I have JSON files which I am trying to split into events, as each log contains multiple events. Here is an example of what a log would look like:

  {
    "vulnerability": [
      {
        "event": {
          "sub1": { "complexity": "LOW" },
          "sub2": { "complexity": "LOW" }
        },
        "id": "test",
        "description": "test",
        "state": "No Known",
        "risk_rating": "LOW",
        "sources": [ { "date": "test" } ],
        "additional_info": [ { "test": "test" } ],
        "was_edited": false
      },
      {
        "event": {
          "sub1": { "complexity": "LOW" },
          "sub2": { "complexity": "LOW" }
        },
        "id": "test",
        "description": "test",
        "state": "No Known",
        "risk_rating": "LOW",
        "sources": [ { "date": "test" } ],
        "additional_info": [ { "test": "test" } ],
        "was_edited": false
      }
    ],
    "next": "test",
    "total_count": 109465
  }

In this example there would be two separate events that I need extracted; I am essentially trying to pull out each object in the vulnerability array. Each log should have this exact JSON format, but there could be any number of events in it. Each extracted event should look like:

  {
    "event": {
      "sub1": { "complexity": "LOW" },
      "sub2": { "complexity": "LOW" }
    },
    "id": "test",
    "description": "test",
    "state": "No Known",
    "risk_rating": "LOW",
    "sources": [ { "date": "test" } ],
    "additional_info": [ { "test": "test" } ],
    "was_edited": false
  }

I also want to exclude the opening

  { "vulnerability": [

and closing

  ], "next": "test", "total_count": 109465 }

portions of the log files. Am I missing something in how to set this sourcetype up? I have the following currently, but it does not seem to be working:

  LINE_BREAKER = \{(\r+|\n+|\t+|\s+)"event":
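An untested props.conf sketch (the sourcetype name is a placeholder, and the regexes are assumptions built from the sample): break between array elements, then use SEDCMD to trim the wrapper so each event is a self-contained JSON object. In LINE_BREAKER only the first capture group is discarded, so the comma between elements is consumed while the next object's opening brace is kept:

  [vuln_json]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = (,)\s*\{\s*"event"
  SEDCMD-strip_head = s/^\{\s*"vulnerability":\s*\[\s*//
  SEDCMD-strip_tail = s/\s*\]\s*,\s*"next".*$//
  TRUNCATE = 0
  DATETIME_CONFIG = CURRENT

SEDCMD runs after line breaking, so strip_head and strip_tail only affect the first and last events of each file, leaving every event with balanced braces. DATETIME_CONFIG = CURRENT is included only because the sample carries no timestamp (assuming index time is acceptable), and TRUNCATE = 0 (or a suitably large value) guards against long events being cut.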