All Posts


@tscroggins , is there an alternative to using xyseries? Its results are limited, and the displayed results are therefore truncated.
F5 WAF logs
Hi @michael_vi , as @richgalloway and @kiran_panchavat said, you can use regex101 to find the correct regex to cut a part of your JSON. One point of attention: the JSON format has a well-defined structure, so be careful when cutting a part of the event, because if you break the JSON structure, INDEXED_EXTRACTIONS=JSON and the spath command will not work correctly, and you will have to manually parse all the fields! Ciao. Giuseppe
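To make that concrete, here is a minimal sketch of cutting one flat JSON object out of the raw event and then parsing it; the index, sourcetype, field name json_part, and the regex itself are illustrative placeholders, not taken from the thread:

index=your_index sourcetype=f5:waf
``` Extract one non-nested JSON object into its own field (assumed pattern). ```
| rex field=_raw "\"all_response_headers\":\s*(?<json_part>\{[^}]*\})"
``` spath only parses json_part correctly if the fragment is still valid JSON, as Giuseppe warns. ```
| spath input=json_part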
Please share some raw anonymised sample events in a code block using the </> button so we can see what you are dealing with.
Didn't get you, but this is the query I have built up so far, which captures the time difference in HH:MM:SS in the stats view; I want to display the same duration in the chart as well:

index=music Job=*
| stats values(host) as Host, values(Job) as Job, earliest(_time) as start_time, latest(_time) as end, values(x) as "File Name" by oid
| eval Duration=(end-start_time)
| eval end = strftime(end,"%m-%d-%Y %H:%M:%S")
| eval start_time = strftime(start_time,"%m-%d-%Y %H:%M:%S")
| rename oid as OPID, start_time as "Start Time", end as "End Time"
| chart list(Duration) as Duration by "Start Time"
| fieldformat Duration=tostring(round(Duration, 0), "duration")
Hi @dinesh001kumar , the only way is to put them in an app or an add-on and submit it for upload. If there isn't any issue, you can upload it. Otherwise you could ask Splunk Cloud Support, but I'm not sure they will do it. Ciao. Giuseppe
Start with your requirements. This is a very imprecise requirement. What does "neat" mean? What information are you being asked to show? Next, start creating searches to calculate that information from your logs. Then choose a way to display that information in an informative way. Finally, if you want some more help here, please post your raw events, i.e. unformatted, preferably in a code block using the </> button above. That way we can simulate your situation and suggest some searches.
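For example, with the WAF JSON shown in the question further down this page, grouping by theme rather than one panel per field keeps the dashboard readable. A minimal sketch of one such panel search, with the index and sourcetype as placeholders you would replace with your own:

index=your_waf_index sourcetype=your_waf_sourcetype
``` One panel: WAF verdicts over time, using an auto-extracted field. ```
| timechart count by waf_log.status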
Hi @Karthikeya  It sounds like you might need to work with the dashboard users to understand exactly what they want out of the dashboard - what is their main goal when they look at it? We do not want to overwhelm the users with charts they cannot read or make sense of. I'd start by understanding the main purpose of the dashboard, and then the top 3-4 statistics or details they want to be able to see. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @livehybrid , using only Splunk, the only way is to index all the logs and use dedup in your searches, but in this way you pay the license twice, because it isn't possible in Splunk to create a filter that avoids duplicates before indexing. The only alternative is to collect the logs with rsyslog, write them to files, and then pre-parse the logs with a script, but that's very heavy on the system. Ciao. Giuseppe
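A minimal sketch of that search-time dedup, using the index names from the question and assuming the duplicated events have identical raw text in both indexes:

index=sony_a OR index=sony_b
``` Keep one copy of each identical raw event; adjust the field list if only parts of the event repeat. ```
| dedup _raw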
Ah, is there anything unique in a pair of events to split it by? Or anything on the event to show whether it is the start or the end? Please could you share some anonymised sample events for us to look at in order to help further? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi, I think ultimately this might depend on the source of the data - what are you sending to the syslog server?
There are multiple jobs, each having a unique start and end time.
Hi @sureshkumaar , it isn't a good idea to attach a new question to a closed question, even if it's on the same topic: it's always better to open a new one, to get a faster and probably better answer to your question. Anyway, if the regex that you're using matches all the events to filter, it's correct and you can use it. Ciao. Giuseppe
I have a requirement to create a dashboard with the following JSON data:

all_request_headers: {
    Accept: */*
    Content-Length: 0
    Content-Type: text/plain
    Cookie: Cookie1=Salvin
    Host: wasphictst-wdc.hc.cloud.uk.sony
    User-Agent: insomnia/2021.5.3
}
all_response_headers: {
    Connection: keep-alive
    Content-Length: 196
    Content-Type: text/html; charset=iso-8859-1
    Date: Fri, 14 Feb 2025 15:51:13 GMT
    Server: Apache/2.4.37 (Red Hat Enterprise Linux)
    Strict-Transport-Security: max-age=31536000; includeSubDomains
}
waf_log: {
    allowlist_configured: false
    allowlist_processed: false
    application_rules_configured: false
    application_rules_processed: false
    latency_request_body_phase: 1544
    latency_request_header_phase: 351
    latency_response_body_phase: 15
    latency_response_header_phase: 50
    memory_allocated: 71496
    omitted_app_rule_stats: { ... }
    omitted_signature_stats: { ... }
    psm_configured: false
    psm_processed: false
    rules_configured: true
    rules_processed: true
    status: PASSED
}

Fields are getting auto-extracted, like waf_log.allowlist_configured ... etc. They want a neat dashboard for request headers, response headers, WAF log details etc. How do I create this dashboard? I am confused. If we create it based on fields, then there will be so many panels, right?
Is the data being sent from the origin to both syslog servers at the same time? -- Yes, both syslog servers are picking up the same log and ingesting it at the same time. Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? --- How do we achieve this?
Hi @splunklearner  It sounds like your duplication is happening before the data hits Splunk - it's not easy to deduplicate this on the way through; instead you might want to look at how the data is sent to syslog. Is the data being sent from the origin to both syslog servers at the same time? Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @dinesh001kumar  You can add files into an app and then package this to be uploaded to Splunk Cloud - it isn't possible to upload CSS/HTML via the UI in Splunk Cloud. Create an app and add the required files to <APP NAME>/appserver/static/ so that they are then accessible within the app. Have a look at https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/UseCSS#Customize_styling_and_behavior_for_one_dashboard for more info on using CSS too. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
We have two standalone syslog servers, both active (one named primary and the other contingency), each with a UF installed that forwards data to Splunk. We have different indexes configured for these two servers. Now the issue is that the same log is getting indexed from both servers, which results in duplication of logs in Splunk.

Syslog 1 --- index = sony_a == Same log
Syslog 2 --- index = sony_b == Same log

When we search with index=sony* it returns the same logs from two indexes, which is duplication. How do we avoid the two syslog servers getting the same log indexed twice?
Is Job unique for each start/end? If so I would suggest something like this:

index=music Job=*
| stats earliest(_time) as start_time, latest(_time) as end_time by Job
| eval Duration=(end_time-start_time)
``` The rest of your SPL here, such as ```
| chart values(Duration) as Duration by start_time

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
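If you also want the chart's Duration column displayed as HH:MM:SS rather than raw seconds, a minimal addition to the above (not from the thread, but using the same tostring conversion the original query already relied on):

| chart values(Duration) as Duration by start_time
``` Render seconds as HH:MM:SS in the statistics table; the underlying value stays numeric. ```
| fieldformat Duration=tostring(round(Duration, 0), "duration")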
Hello, we are on ES 7.3.2. We are noticing a difference between the count of notable alerts visible on the "Incident Review" page and the number of events in the notable index for the same time period. For example, our Incident Review page, when filtered to show all notables for the previous month's time range, shows 4648 notable alerts generated. Screenshot attached. But if I check index=notable for the previous month's time range, it shows 4653 events. Likewise, we are seeing this difference every month. Ideally both numbers should match. How do we find out what is causing this mismatch, and what is the reason exactly?
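One way to get a first handle on the gap, a rough sketch assuming your ES installation ships the standard `notable` macro that Incident Review is built on:

``` Roughly what Incident Review counts: the macro applies the filtering/suppression Incident Review uses. ```
`notable`
| stats count as incident_review_count

``` Raw events in the index, including notables that suppression would hide. ```
index=notable
| stats count as raw_notable_count

If the first number is lower, suppressed notables are a likely explanation for the difference.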