All Topics

We have a file that is rotated at midnight every night. The file is renamed and zipped up. Sometimes after the log rotation Splunk does not ingest the new file. There are no errors in the splunkd log relating to CRC or anything along those lines. A restart of Splunk resolves the issue; however, we would like to find a more permanent solution. We are on UF version 9.0.4. I'd appreciate any suggestions you may have.
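For reference, a minimal sketch of the kind of monitor stanza involved, with the two settings most often reviewed when a rotated or renamed file is skipped (the path and sourcetype here are hypothetical, and whether this applies to your case is an assumption):

# inputs.conf on the UF (hypothetical path and sourcetype)
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp
# If the new file begins with the same bytes as the rotated one, the CRC check
# can treat it as already seen. crcSalt = <SOURCE> keys the CRC to the file path;
# initCrcLength widens the window of bytes used to compute the CRC.
crcSalt = <SOURCE>
initCrcLength = 1024

Checking splunkd.log for TailReader/WatchedFile messages around midnight would help confirm what the tailer decided about the new file.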
Hello, I need a proxy connection when I use TA-tenable-easm on Splunk. Is there a way or a guide to set up a proxy for TA-tenable-easm?
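I can't confirm the exact configuration TA-tenable-easm expects, but add-ons generated with the Splunk Add-on Builder typically read proxy settings either from a Proxy tab on the add-on's Configuration page or from a settings .conf file in the app's local directory. A purely hypothetical sketch of that common pattern (the file name and every key are assumptions to verify against the TA's documentation):

# local/ta_tenable_easm_settings.conf (hypothetical file and key names)
[proxy]
proxy_enabled = 1
proxy_type = http
proxy_url = proxy.example.com
proxy_port = 8080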
Hello, how can I display the date range from the time range dropdown selector in Dashboard Studio? Thank you for your help. I am currently using the "Table" visualization type and created a data configuration with the following search. info_min_time and info_max_time gave me duplicate data for each row, and I had to use dedup. Is this a proper way to do it? Is there a way to use the time tokens ($timetoken.earliest$ or $timetoken.latest$) from the time range dropdown selector in the search in the data configuration (not in XML)?

index=test | addinfo | eval info_min_time="From: ". strftime(info_min_time,"%b %d %Y %H:%M:%S") | eval info_max_time="To: ". strftime(info_max_time,"%b %d %Y %H:%M:%S") | dedup info_min_time, info_max_time | table info_min_time, info_max_time
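A hedged alternative that avoids scanning the index and the dedup (assuming the table's data source is wired to the time range input, so addinfo reports that range):

| makeresults
| addinfo
| eval info_min_time = "From: ".strftime(info_min_time, "%b %d %Y %H:%M:%S")
| eval info_max_time = "To: ".strftime(info_max_time, "%b %d %Y %H:%M:%S")
| table info_min_time, info_max_time

Dashboard Studio also substitutes input tokens into data source queries, so $timetoken.earliest$ and $timetoken.latest$ can be referenced directly in the SPL (e.g. earliest=$timetoken.earliest$); $timetoken$ standing for your time input's ID is an assumption here.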
I am new to Splunk and I have inherited a system that forwards logs in CEF CSV format. These logs are then tar'd up and sent to the distant end (which does happen successfully). The issue I have is that when the Splunk server picks up the CEF CSV, it has epoch time as the first entry of every log in the CEF CSV file. This makes the next hop/stop aggregator I send to unhappy.

original host (forwarder) -> splunk host -> splunk host -> master aggregator (arcsight type server)

example: 1706735561, "blah blah blah"

The file cef.csv says it's doing "_time","_raw". When I look at what I think is the setup for time (etc/datetime.xml), _time does not have anything about epoch or %s in there. How do I configure the CEF CSV to omit the epoch time? As I mentioned earlier, I am totally new to Splunk. Any help would be fantastic.
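The "_time","_raw" header suggests the file is an export whose first column is the event's epoch timestamp, rather than anything datetime.xml controls. If the goal is simply to drop that leading epoch field at ingest, one hedged option (the stanza name is a placeholder, and the pattern assumes a 10-digit epoch followed by a comma) is a SEDCMD in props.conf on the first full Splunk instance that parses the data:

# props.conf (stanza name is an assumption)
[cef_csv]
SEDCMD-strip_epoch = s/^\d{10},\s*//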
Hi, we came across a strange issue: CSV logs are not getting ingested when a log has only one line (in addition to the header). The same logs with two or more lines are ingested successfully. Here are the inputs.conf and props.conf we are using.

inputs.conf

[monitor:///apps/ab_cd/resources/abcd/reports_rr/reports/abc/.../*_splunk.csv]
sourcetype = source_type_name
index = index_name
ignoreOlderThan = 2h
crcSalt = <SOURCE>

props.conf

[source_type_name]
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
PREAMBLE_REGEX = ^Region
TIME_PREFIX = ^(?:[^,\n]*,){1}
TIME_FORMAT = %Y-%m-%d
MAX_TIMESTAMP_LOOKAHEAD = 10
MAX_DAYS_HENCE = 5

Appreciate all the ideas.
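One possibility worth ruling out (an assumption, not a confirmed diagnosis): a header plus a single short line can total fewer bytes than the initial-CRC window the file tailer uses to fingerprint files, and very small files have been reported to behave oddly until they grow. A hedged alternative is to let Splunk parse the CSV structurally with indexed extractions; note this must live on the forwarder that monitors the files, and the column name below is a placeholder:

[source_type_name]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = your_date_column
TIME_FORMAT = %Y-%m-%d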
Hello, I'm starting out on my Splunk journey and have been tasked with figuring out a dashboard for my executives. I created a layout for a dashboard and had the idea of creating a chart, but have been struggling with the logic. What I'm looking to do is chart the count against the average count over time, so I have a chart of each period's percentage of its average throughput. I had a few ideas for the search but none seemed to work. Could someone give me some direction please on what I've gotten so far? (It's definitely wrong.)

index=* | where index="Index 1" OR index="Index 2" OR index="Index 3" | eval Count=sum(count(index)) / "something something something to get the average" | timechartcount by Count
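A hedged sketch of one way to get there (the index names are placeholders copied from the attempt above; adjust the span and the baseline to taste): filter in the base search rather than with where, let timechart do the counting, then compare each bucket against the overall average with eventstats:

(index="Index 1" OR index="Index 2" OR index="Index 3")
| timechart span=1h count
| eventstats avg(count) AS avg_count
| eval pct_of_avg = round(100 * count / avg_count, 1)
| fields _time, count, avg_count, pct_of_avg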
Diagnose the cause of pending and stuck pods in your Kubernetes and OpenShift clusters Video Length: 2 min 23 seconds

CONTENTS | Introduction | Video | Resources | About the presenter

In this demo, follow along as Doug Lindee uses the relationships view with correlated metrics to troubleshoot recurring cluster health violations and identify the root cause.

Additional Resources

Learn more about cluster monitoring in the documentation: Kubernetes and App Service Monitoring

About the presenter

Douglas Lindee joined Cisco AppDynamics as a Field Architect in late 2021, with a 20+ year career behind him in systems, application, and network monitoring, event management, reporting, and automation, most recently on an extended engagement focusing on AppDynamics. With this broad view of monitoring solutions and technology, he serves as a point of technical escalation, assisting sales teams to overcome technical challenges during the sales process.
I have records that come with multiple items in a single row. Is there a way I can break them down into one row per item? The rest of the values will be the same and can be copied. In the screenshot below, can we break the first row into two rows, the second into 5 rows, etc.? Thanks in advance to the Splunk Community. They are super helpful.
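A hedged sketch of the usual approach, assuming the multi-item column is (or can be split into) a multivalue field; mvexpand copies all the other columns onto each new row. The field names and semicolon delimiter are made up for illustration:

| makeresults
| eval owner="team-a", items="item1;item2;item3"
| eval items = split(items, ";")
| mvexpand items

Against real data only the last two lines would apply; the first two just fabricate a sample row.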
There will be planned maintenance for Splunk Synthetic Monitoring as specified below:

Realm / Planned Maintenance Window
app.jp0.signalfx.com: February 15, 2024 from 9.00 am PST to 11.00 am PST (GMT-7) [link to status page]
app.au0.signalfx.com: February 20, 2024 from 8.00 am to 10.00 am PT (GMT-7) [link to status page]
app.eu0.signalfx.com: February 21, 2024 from Noon to 2.00 pm PT (GMT-7) [link to status page]
app.us0.signalfx.com: February 22, 2024 from 7.00 am to 9.00 am PT (GMT-7) [link to status page]
app.us1.signalfx.com: February 23, 2024 from 7.00 am to 9.00 am PT (GMT-7) [link to status page]
app.us2.signalfx.com: Not Applicable / No impact

During this maintenance window, the user interface of Splunk Synthetic Monitoring will not be available, and some tests may be paused momentarily. You can find which realm / region you're using by following the steps below:

1. In the Observability Cloud main menu, select Settings.
2. Select your user name at the top of the Settings menu.
3. On the Organizations tab, you can view or copy your realm, organizations, and organization IDs.

Please note that the planned maintenance activity only applies to Splunk Synthetic Monitoring. Other products and features of Splunk Observability Cloud, Rigor Synthetic Monitoring, and Web Optimization will not be impacted by this maintenance window. For any questions, please reach out on the Splunk Support Portal to create a Support case (select Get Started > Create a Case > Support > Splunk Synthetic Monitoring).
Hi Splunkers, I have the following situation and am interested in another opinion. We have a distributed environment with clustered indexers and SHs, and HFs in distributed sites. We are using a deployer to push out confs to the HFs and other assets defined by serverclass. I am trying to set up a configuration where the HFs receive data from a remote host inbound on a specific TCP port.

HF deployment app: local/inputs.conf contains a stanza for the expected data:

# Remote Host 1
[tcp:12345]
index = indexA
sourcetype = sourceType1
disabled = 0

Now there is a TA for this data type, but it has an inputs.conf defined as:

[tcp://22245]
connection_host = dns
index = indexSomethingElse
sourcetype = sourceType
disabled = 0

Which one takes precedence? And if the indexes are different, will this mess up the ingestion and indexing? Am I right in assuming that the inputs.conf defined for the overall inputs takes precedence?

REF: https://docs.splunk.com/Documentation/Splunk/9.1.3/Admin/Wheretofindtheconfigurationfiles
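One observation and a hedged check: as written, the two stanzas name different ports (tcp:12345 vs. tcp://22245), so both inputs would load independently; the precedence question only arises when the stanza names match exactly, in which case Splunk's configuration-layering rules decide the merged winner. The definitive way to see the effective result on an HF (a standard troubleshooting step, assuming a default install path):

$SPLUNK_HOME/bin/splunk btool inputs list --debug

The --debug flag prints which file each effective setting came from.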
Hello, I am using the addcoltotals command to get the total value of a column, and I would like to display the value returned by addcoltotals in the subject of the email when an alert is triggered.

my_search | chart count AS XXXX by YYYY | addcoltotals labelfield="Total Delivered"

The output is:

Files | Files_Count | Total Delivered
F1    | 3           |
F2    | 5           |
F3    | 3           |
      | 11          | Total

I would like 11 to be displayed in the subject line. I tried various tokens but could not get it working. Regards
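A hedged workaround: the email alert action's $result.<fieldname>$ token reads from the first result row, while addcoltotals appends its total as the last row. Computing the total onto every row with eventstats (or sorting the totals row to the top) makes it reachable:

my_search
| chart count AS XXXX by YYYY
| eventstats sum(XXXX) AS total

Then a subject such as "Files delivered: $result.total$" should work; the field name total is just an example.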
I have AWS CloudTrail data and want to find out how long an EC2 instance was stopped. Is it possible to subtract the EpochOT of Row 2 from Row 3, and Row 4 from Row 5, etc.?
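A hedged sketch, assuming alternating stop/start events per instance and a field named EpochOT holding epoch seconds (the instance_id field name is a placeholder that depends on how your CloudTrail fields are extracted):

index=cloudtrail sourcetype=aws:cloudtrail (eventName=StopInstances OR eventName=StartInstances)
| sort 0 instance_id, EpochOT
| streamstats current=f window=1 last(EpochOT) AS prev_epoch, last(eventName) AS prev_event by instance_id
| where eventName="StartInstances" AND prev_event="StopInstances"
| eval stopped_duration_sec = EpochOT - prev_epoch

streamstats with current=f window=1 carries the previous row's values forward, which gives the row-to-row subtraction described above.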
Hello, I want to install the universal forwarder on Windows 11. I proceeded according to these instructions. So far I have done the steps below:

1. Installed the Universal Forwarder on Windows (splunkforwarder-9.1.3-d95b3299fa65-x64-release.msi)
2. Downloaded the credentials file from the cloud portal (splunkclouduf.spl)
3. Downloaded the Windows TA file (splunk-add-on-for-microsoft-windows_880.tgz)

Now I don't understand how to proceed from here; please help.
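A hedged sketch of the usual next steps (the install path assumes the default location, and the download paths are placeholders; verify against the Splunk Cloud forwarder documentation):

rem Install the Splunk Cloud credentials app, then the Windows add-on, then restart
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" install app C:\Downloads\splunkclouduf.spl
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" install app C:\Downloads\splunk-add-on-for-microsoft-windows_880.tgz
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart

After the restart, enable the inputs you need in the Windows TA's local/inputs.conf.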
Hi, I have an output like this:

Location | EventName | ErrorCode | Summary
server1 | Mssql.LogBackupFailed | BackupAgentError | Failed backup....
server2 | Mssql.LogBackupFailed | BackupAgentError | Failed backup....

Now I am trying to combine all the values of Location, EventName, ErrorCode, and Summary into one field called "newfield", let's say using a comma "," or ";". I am trying this command:

| eval newfield= mvappend(LocationName,EventName,ErrorCode,summary)

but the output it is giving is:

server1 Mssql.LogBackupFailed BackupAgentError Failed backup....

The output I am expecting is:

server1,Mssql.LogBackupFailed,BackupAgentError,Failed backup
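mvappend builds a multivalue field, which the UI renders as separate values rather than one delimited string. To get a single comma-separated string, join the values (note the mismatch between LocationName/Location and summary/Summary in the attempt above, which may need aligning with the actual extracted field names):

| eval newfield = mvjoin(mvappend(Location, EventName, ErrorCode, Summary), ",")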
We have application data coming from Apache Tomcats and have a regex in place to extract the exception name. But some Tomcats send data in slightly different formats, and the extraction doesn't work for them. I have updated regexes ready for these different formats, but want to keep the field name the same, i.e. exception. How do I manage multiple extractions against the same sourcetype while keeping the field name the same? If I add these regexes in transforms, would they end up conflicting with each other? Or should I create them as different fields, such as exception1 and exception2, and then use coalesce to eventually merge them into a single field?
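A hedged sketch (the stanza name and regexes are placeholders): multiple EXTRACT- entries, or REPORT- transforms, on the same sourcetype can target the same capture-group name; each regex only contributes a value when it matches, so they coexist rather than conflict and no coalesce step is needed:

[tomcat:app]
EXTRACT-exception_fmt1 = ERROR\s+(?<exception>\w+Exception)
EXTRACT-exception_fmt2 = Caused by:\s+(?<exception>[\w\.]+Exception)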
Hello to everyone! One of the sourcetypes contains messages with no timestamp:

<172>hostname: -Traceback: 0x138fc51 0x13928fa 0x1399b28 0x1327c33 0x3ba6c07dff 0x7fba45b0339d

To resolve this problem, I created a transform rule that successfully eliminated this "junk" from the index:

[wlc_syslog_rt0]
REGEX = ^<\d+>.*?:\s-Traceback:\s+
DEST_KEY = queue
FORMAT = nullQueue

But after that, I still get messages indicating that timestamp extraction failed:

01-31-2024 15:08:17.539 +0300 WARN DateParserVerbose [17276 merging_0] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (20) characters of event. Defaulting to timestamp of previous event (Wed Jan 31 15:08:05 2024). Context: source=udp:1100|host=172.22.0.11|wlc_syslog|\r\n 566 similar messages suppressed. First occurred at: Wed Jan 31 15:03:13 2024

All other events from this sourcetype look like this:

<172>hostname: *spamApTask0: Jan 31 12:58:47.692: %LWAPP-4-SIG_INFO1: [PA]spam_lrad.c:56582 Signature information; AP 00:57:d2:86:c0:30, alarm ON, standard sig Auth flood, track per-Macprecedence 5, hits 300, slot 0, channel 1, most offending MAC 54:14:f3:c8:a1:b3

Before asking, I tried to find events without a timestamp by using regex and the cluster command but didn't find anything. So, is this normal behavior, i.e. does Splunk flag the missing timestamp before the events move to the nullQueue, or did I do something wrong?
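For what it's worth, the [merging_0] tag in the warning suggests it is emitted in the aggregation stage, which runs before the typing stage where nullQueue routing happens, so seeing the warnings for events that are later dropped would be expected. A hedged props.conf sketch that anchors the timestamp for the events that do carry one (the TIME_PREFIX regex is an assumption derived from the sample above):

[wlc_syslog]
TIME_PREFIX = ^<\d+>[^:]+:\s+\S+:\s+
TIME_FORMAT = %b %d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25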
Hello Splunk community, I would like to know if there is a way to change the database location for monitored files in the Splunk universal forwarder, similar to what Fluent Bit allows with the DB property (https://docs.fluentbit.io/manual/pipeline/inputs/tail). My Splunk universal forwarder is running in a container and accesses a shared mount containing my applications' log files, and in case the Splunk UF container restarts I would like to prevent the monitored files from being reindexed from the beginning. Is there a config to choose the database location? Cheers in advance
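The tailing checkpoints live in the fishbucket under $SPLUNK_DB (by default var/lib/splunk inside the install directory), so the usual container approach is either to mount a persistent volume at that path or to point SPLUNK_DB somewhere persistent. A hedged sketch (the mount path is an assumption):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/persistent/splunk-db

Either way, the fishbucket survives container restarts and files are not re-read from the beginning.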
Hello, I have a question. We have lots of indexes, and rather than specify each one, I use index=*proxy* to search across index=some_proxy1 and index=some_proxy2. I understand that index=* is obviously a bad thing to do, but does index=*proxy* really cause bad things to happen in Splunk? I've been using syntax like this for several years, and nothing bad has ever happened. I did a test on one index.

With index=*proxy*: This search has completed and has returned 1,000 results by scanning 117,738 events in 7.115 seconds.
With index=some_proxy1: This search has completed and has returned 1,000 results by scanning 121,162 events in 7.318 seconds.

As you can see in the example, using *proxy* over the same time period was actually quicker.
Hi, I have this query that calculates how much time the alerts are open. So far so good, but unfortunately if the rule name repeats (duplicate rule name) in a new event, then the now() function does not know how to calculate the correct time for the first rule that triggered. How can I calculate the SLA time without deleting duplicates, while keeping the same structure as shown in the picture?
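Without the screenshot I can only sketch the usual pattern (every field name here is an assumption): time each firing independently by grouping on a per-incident identifier in addition to the rule name, so duplicates of the same rule no longer collapse into one row:

index=alerts
| stats earliest(_time) AS opened, latest(_time) AS last_seen, latest(status) AS status by rule_name, alert_id
| eval sla_hours = round((if(status="closed", last_seen, now()) - opened) / 3600, 2)

If there is no alert_id-style field and each firing is a single event, a streamstats count by rule_name over time-sorted events can synthesize an occurrence number to group on instead.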