All Posts


Hello Splunkers, I have the following query returning search results:

index="demo1"
| search "metrics.job.overall_status"="FAILED" OR "metrics.job.overall_status"="PASSED" metrics.app="*"
| eval timestamp=strftime(floor('metrics.job.end_ts'), "%Y-%m-%d %H:%M:%S")
| sort 0 metrics.app timestamp
| streamstats current=f last(metrics.job.overall_status) as prev_status last(timestamp) as prev_timestamp by metrics.app
| fillnull value="NONE" prev_status
| fillnull value="NONE" prev_timestamp
| eval failed_timestamp=if(metrics.job.overall_status="FAILED" AND (prev_status="NONE" OR prev_status!="FAILED"), timestamp, null())
| table metrics.app, metrics.job.overall_status, prev_status, timestamp, prev_timestamp, failed_timestamp

The failed_timestamp column is null in every row, even though there are events with FAILED status that meet the conditions above. What is wrong? Can anyone please share how to correct this?
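One likely cause, assuming standard eval semantics: inside eval, a field name containing dots must be wrapped in single quotes (as is already done for 'metrics.job.end_ts'); written bare, metrics.job.overall_status is parsed as the concatenation of the fields metrics, job, and overall_status, so the comparison never matches and the if() falls through to null(). A minimal sketch of the corrected eval:

| eval failed_timestamp=if('metrics.job.overall_status'="FAILED" AND (prev_status="NONE" OR prev_status!="FAILED"), timestamp, null())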
Anyway, I have a distributed deployment:
- Deployment Master
- Deployer
- Indexer A
- Indexer B
- License Master + Monitoring Console
- Forwarder A
- Forwarder B
- Search Head A
- Search Head B
- Search Head C

Where should I add the props.conf and transforms.conf?
exactly the way I wanted... thanks a ton
Hi, we have a map visualization in a Splunk Dashboard Studio dashboard where we use the marker type. We could configure the tooltip so that it shows the values when a marker is hovered over. Is it possible to show the tooltip by default, always, without hovering? Thanks in advance.
@ITWhisperer I just tried whether we can use appendpipe; it gives the sum of both values, but it does not work. When I select All, it should show all of them with their corresponding values instead of the star (*). I have only one header.
The regex does not match the sample event. It will only work for events from the host "CN-SH-PSG-01". To match any host name, try this regex:

"(?:[\w-]+)"\s+(?<bytes_in>\d+)\s+(?<client_ip>\d+\.\d+\.\d+\.\d+)\s+(?<status_code>\d+)\s+(?<action>[^\s]+)\s+(?<bytes_out>\d+)\s+(?<bytes_out2>[^\s]+)\s+(?<http_method>[^\s]+)\s+(?<protocol>[^\s]+)\s+(?<domain>[^\s]+)\s+(?<port>\d+)\s+[^\s]+\s+(?<user>[^\s]+)\s+[^\s]+\s+[^\s]+\s+(?<mime_type>[^\s]+)\s+[^\s]+\s+"(?<user_agent>[^"]+)"

Notice I removed the meaningless "^.*" from the beginning; it is implied in any regular expression without the ^ anchor. The FORMAT setting must be on a separate line, but I presume that's a copy-paste error. Changes to transforms require a restart of the indexer and apply only to new events. Make sure the source name associated with the data is exactly "syslog". If it isn't, try using the sourcetype name instead.
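For reference, a minimal sketch of how the two files might be laid out, reusing the stanza name from the question (substitute the corrected regex above; WRITE_META is needed when an index-time transform writes fields via FORMAT):

# transforms.conf
[proxysg_field_extraction]
REGEX = <the corrected regex shown above>
FORMAT = bytes_in::$1 client_ip::$2 status_code::$3 action::$4 bytes_out::$5 bytes_out2::$6 http_method::$7 protocol::$8 domain::$9 port::$10 user::$11 mime_type::$12 user_agent::$13
WRITE_META = true

# props.conf
[source::syslog]
TRANSFORMS-proxysg_field_extraction = proxysg_field_extraction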
Certificates are not accessible through the GUI.  You'll need to find someone with CLI access or fix the problem in the app and re-install it.
Hi everyone, I'm currently using VMware vRealize Log Insight to collect logs from ESXi hosts, vCenter servers, and NSX components. I then forward these logs to Splunk. However, I've noticed that Log Insight doesn't always parse logs correctly. I'm considering switching to direct integration using the Splunk Add-ons for VMware and NSX.

My questions:
- Log volume reduction: For those who have used Log Insight, what kind of log volume reduction have you achieved through filtering and aggregation before forwarding logs to Splunk?
- License usage: How does Splunk license usage compare between using Log Insight for pre-processing and direct ingestion with the Splunk Add-ons?
- Best practices: Are there any best practices or tips for optimizing Splunk license usage with either approach?

Context:
- Current log volume: approximately 300 GB per day (raw).
- Goals: improve log parsing accuracy while optimizing Splunk license usage.

Any insights or experiences would be greatly appreciated! Thanks in advance!
I'm trying to extract fields for Symantec ProxySG with transforms.conf & props.conf, but it isn't working. Here is a sample log:

Aug  4 16:31:58 2024-08-04 08: 31:28 "hostname" 5243 xx.xx.xx.xx 200 TCP_TUNNELED 6392 2962 CONNECT tcp domain.com 443 / - yyyy - xx.xx.xx.xx xx.xx.xx.xx "None" - - - - OBSERVED - - xx.xx.xx.xx - 7b711515341865e8-0000000008da5077-0000000066af3c5e - -

Here is my configuration:

REGEX = ^.*"CN-SH-PSG-01"\s+(?<bytes_in>\d+)\s+(?<client_ip>\d+\.\d+\.\d+\.\d+)\s+(?<status_code>\d+)\s+(?<action>[^\s]+)\s+(?<bytes_out>\d+)\s+(?<bytes_out2>[^\s]+)\s+(?<http_method>[^\s]+)\s+(?<protocol>[^\s]+)\s+(?<domain>[^\s]+)\s+(?<port>\d+)\s+[^\s]+\s+(?<user>[^\s]+)\s+[^\s]+\s+[^\s]+\s+(?<mime_type>[^\s]+)\s+[^\s]+\s+"(?<user_agent>[^"]+)"FORMAT = bytes_in::$1 client_ip::$2 status_code::$3 action::$4 bytes_out::$5 bytes_out2::$6 http_method::$7 protocol::$8 domain::$9 port::$10 user::$11 mime_type::$12 user_agent::$13

[source::syslog]
TRANSFORMS-proxysg_field_extraction = proxysg_field_extraction

I've tried changing the config, but the fields are still not extracted. I have tested my regex on regex101.com and it matches fine.
While adding a device to the Citrix add-on, it gives the following message:

Failed to verify your SSL certificate. Verify your SSL configurations in splunk_ta_citrix_netscaler_settings.conf and retry.

Where can I solve this issue through the GUI interface, as I can't access the CLI?
And when I use custom format as shown below its returning 0 events: index=main sourcetype="access_combined_wcookie" earliest="1/15/2024:20:00:00" latest="2/22/2024:20:00:00"

If you read the document @PickleRick posted, you know that this is the only accepted format. To diagnose why you get zero results, you have to prove that you had events in that period. In other words, what makes you think 0 is not the correct result? Is it possible that your events were not ingested with the correct _time value?

Forget half a year ago. Does a search like

index=main sourcetype="access_combined_wcookie" earliest=-1d

return the correct results? How about the first month of the year?

index=main sourcetype="access_combined_wcookie" earliest=-0y@y latest=-0y@y+1mon

All this is to say: without proper context (raw data, event frequency, etc.), your question is unanswerable.
| eventstats dc(field1) as dc_field1 | stats dc(field2) as dc_field2, max(dc_field1) by field1
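A quick way to see how this behaves, using makeresults to fake a few rows (hypothetical field values; makeresults format=csv needs a reasonably recent Splunk version):

| makeresults format=csv data="field1,field2
A,x
A,y
B,x"
| eventstats dc(field1) as dc_field1
| stats dc(field2) as dc_field2, max(dc_field1) by field1

eventstats stamps the overall distinct count of field1 onto every event, and max() simply carries that constant through the split by field1.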
I want to get the following in a single query:
1. dc of field1 overall
2. dc of field2 by field1
Hello Splunkers!! I am getting the below message while loading a Splunk dashboard, as I am using a JavaScript file and CSS in the dashboard. Please help me fix this error so it will completely disappear. Error while loading Splunk dashboard. Error showing in the developer console.
https://docs.splunk.com/Documentation/Splunk/latest/Search/Specifytimemodifiersinyoursearch

But you can also (and it saves you issues with time zones) specify it as an epoch timestamp.
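For example (a sketch; the epoch values below correspond to 2024-01-15 20:00:00 and 2024-02-22 20:00:00 UTC, so adjust if your boundaries are in a different time zone):

index=main sourcetype="access_combined_wcookie" earliest=1705348800 latest=1708632000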
Your custom format (mm/dd/yyyy:HH:MM:SS) should work, assuming you have events in the specified time range.
Hi Splunkers! I wish to get data in a specific time range using the earliest and latest modifiers. I have checked with the time picker that events are there within the specified range, but when I try to run an SPL query it's not working. I have tried the ISO format and a custom format as shown below.

When I use the ISO format it gives an error:

index=main sourcetype="access_combined_wcookie" earliest="2024-01-15T20:00:00" latest="2024-02-22T20:00:00"

And when I use the custom format as shown below it returns 0 events:

index=main sourcetype="access_combined_wcookie" earliest="1/15/2024:20:00:00" latest="2/22/2024:20:00:00"

Please help. I want to do this using the earliest and latest modifiers only.
Your lookup seems to contain wildcarded entries. How is Splunk supposed to know what hosts this should match (assuming you even have your lookup defined correctly with a wildcard match) if you have no events to match with?
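For reference, wildcard matching is enabled on the lookup definition, not in the search. A minimal sketch in transforms.conf, assuming a hypothetical lookup named host_lookup with a host column:

[host_lookup]
filename = host_lookup.csv
match_type = WILDCARD(host)

The same setting is exposed under Advanced options when editing the lookup definition in the UI.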
Which exact fields are you interested in?