All Posts

Hi Team, can you please let me know why I am not able to fetch base_date in the dashboard using the logic below? Please help me fix this issue.

Splunk query:

<input type="time" token="time_token">
  <label>TIME</label>
  <default>
    <earliest>-1d@d</earliest>
    <latest>@d</latest>
  </default>
</input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>| inputlookup V19_Job_data.csv
| eval base_date = strftime(strptime("$time_token.earliest$", "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%d")
| eval expected_epoch = strptime(base_date . " " . expected_time, "%Y-%m-%d %H:%M")
| eval deadline_epoch = strptime(base_date . " " . deadline_time, "%Y-%m-%d %H:%M")
| join type=left job_name run_id
    [ search index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console system = EOCA host = ddebmfr.beprod01.eoc.net (( TERM(JobA) OR TERM(JobB) ) ) ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    | eval Function = case(like(TEXT, "%ENDED - ABEND%"), "ABEND", like(TEXT, "%ENDED - TIME%"), "ENDED", like(TEXT, "%STARTED - TIME%"), "STARTED")
    | eval _time_epoch = _time
    | eval run_id=case( date_hour &lt; 14, "morning", date_hour &gt;= 14, "evening" )
    | eval job_name=if(searchmatch("JobA"), "JobA", "JobB")
    | stats latest(_time_epoch) as job_time by job_name, run_id ]
| eval buffer = 60
| eval status=case( isnull(job_time), "Not Run", job_time &gt; deadline_epoch, "Late", job_time &gt;= expected_epoch AND job_time &lt;= deadline_epoch, "On Time", job_time &lt; expected_epoch, "Early" )
| convert ctime(job_time)
| table job_name, run_id, expected_time, expected_epoch, base_date, deadline_time, job_time, status</query>
        <earliest>$time_token.earliest$</earliest>
        <latest>$time_token.latest$</latest>

This is one huge search. Check each of the "component searches" on their own and see how they fare. Since some of them are raw event searches over half a year's worth of data, possibly through a significant subset of that data, I expect them to be slow simply because you have to plow through all those events (and one of those subsearches has a very ugly field=* condition which makes Splunk parse every single event!). If you need that literal functionality from those searches, I see no other way than using some acceleration techniques - the searches themselves don't seem to be very "optimizable". You might try to change some of them to tstats with PREFIX/TERM, but only if your data fits the prerequisites.
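As a rough illustration of the tstats-with-TERM idea (the index name and job term below are taken from the posted search purely as placeholders - whether this works depends on whether those values exist as whole indexed terms in your data):

| tstats count where index=events_prod_cdp_penalty_esa TERM(JobA) by _time span=1h

The same pattern with PREFIX(somekey=) can pull indexed key=value pairs without touching the raw events, but again only if the data meets the indexed-token prerequisites.
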
Hi @tomapatan

If you structure your lookup so that the more generic match sits lower down the lookup than your more specific match, and you have "Max Matches" set to 1, then it should match the more specific value first and fall back to the more generic one if no specific match is found.

For example, in my test lookup (screenshot not shown) the more specific values are at the top, and I have configured a lookup definition with WILDCARD matches and max matches = 1.

Then I run a search: if country/town isn't set, I set it to "Unknown" (but it could be any value). It maps to 999 because that is the generic value for host1 when town/country is not set. If I now set country=UK, I get a more specific value returned because it matches the country=UK town=* row. If I use host=host999, it matches host* in the lookup and I get an interestingField value of GHI.

Remember that you have to pass all the fields you want to match on to the lookup command, and you should have the more generic matches lower down the lookup file.

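For completeness, a minimal sketch of the lookup definition side in transforms.conf - the stanza, filename and field names below are placeholders, not taken from your environment:

# transforms.conf (placeholder names)
[my_wildcard_lookup]
filename = my_wildcard_lookup.csv
# allow * in these lookup columns to match any value
match_type = WILDCARD(host), WILDCARD(country), WILDCARD(town)
# return only the first matching row (the most specific one, if the file is ordered as described)
max_matches = 1

Then in the search, pass every match field explicitly, for example:

| lookup my_wildcard_lookup host country town OUTPUT interestingField
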
When I checked the logs in more depth, I see the good logs have fewer than 10000 lines, whereas the logs which are truncating have 10001 lines. But I set the TRUNCATE value to 50000, so why is this not applying?
Hi @Praz_123

In props.conf, use the following settings to extract the timestamp in your sourcetype:

[yourSourcetype]
TIME_PREFIX = ^"
TIME_FORMAT = %m/%d/%y %H:%M:%SZ

Explanation:

TIME_PREFIX anchors the timestamp extraction immediately after the opening quote at the start of the line.
TIME_FORMAT matches the date/time format: month/day/two-digit year, space, hour:minute:second, and a trailing "Z" for UTC.

For more info check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Configuretimestamprecognition

If you are able to share a raw event (redacted if required) we can validate it, but the above should hopefully work.

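For reference, these settings assume raw events that open with a quoted timestamp along these lines (an illustrative sample, not taken from the original post):

"06/15/24 14:30:00Z" rest of the event text ...
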
Hi @splunklearner

You mention that the props/transforms are pushed to your indexers, but are they also installed on the HF pulling the Akamai logs? Can you validate that the relevant props/transforms, with TRUNCATE set to a higher-than-longest-event value, are installed on the HF?

$SPLUNK_HOME/bin/splunk btool props list sony_waf --debug

If you run this on your HF you should see your TRUNCATE value set to the expected high value.

What length are your logs being truncated to?

Your approach of using DS->CM->IDX is interesting... but I don't think this is the problem here if the Akamai logs are being pulled by a HF - ultimately we need to ensure the HF has the props!

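To answer the "what length" question, a quick way to check the longest events currently being indexed is something like the following (the index name is a placeholder - substitute your own):

index=your_index sourcetype=sony_waf earliest=-24h
| eval event_length=len(_raw)
| stats max(event_length) as max_length perc95(event_length) as p95_length

If events are being truncated, max_length will tend to cluster at the effective TRUNCATE limit.
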
A few event logs are getting truncated while others are coming in perfectly. We are using the Akamai add-on to pull logs into Splunk.

HF (Akamai input configured) ---> sent to indexers. On the DS all apps (including all props and transforms) are present; these are pushed to the CM, and from the CM to the individual indexers.

props.conf on the DS (DS --> CM --> IDX):

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
TRUNCATE = 50000

Some logs are coming in perfectly. What should I do now? Please suggest.

Hi, I need the timestamp in my events and _time to be the same. While importing the data I am getting a time difference. What should I write in the TIME_PREFIX field?
In terms of understanding which indexes are NOT being accessed: this is actually pretty challenging for a number of reasons. While it's possible to look in the _audit index and see which indexes are being searched, it's difficult to determine exactly which indexes have been searched because:

Different users have access to different indexes, so using wildcards (e.g. index=*) can mean different indexes are accessed depending on roles.
Macros/tags/eventtypes may contain index references and would need to be determined and expanded.
Different user roles may have different srchIndexesDefault values, which means they might not specify an index to search and instead rely on the defaults.

Are you using SmartStore/Splunk Cloud? That may offer some slightly different approaches, as we could look at SmartStore cache activity to try and determine the indexes accessed.

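As a rough starting point only (it will not account for macro/eventtype expansion, tags, or role-based default indexes), something like this pulls index names referenced in completed searches out of _audit:

index=_audit action=search info=completed search=*
| rex field=search max_match=50 "index\s*=\s*\"?(?<searched_index>[^\s\"\)]+)"
| mvexpand searched_index
| stats count by searched_index
| sort - count

Comparing that list against the full set of indexes (for example from | eventcount summarize=false index=*) gives a first approximation of which indexes never show up in searches - with all the caveats above.
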
Hi @spm807, as I said, try using throttling in your alerts; it's the solution to your problem. Ciao. Giuseppe
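For reference, a minimal sketch of what throttling looks like in savedsearches.conf - the stanza name and values here are hypothetical, and the same options are exposed in the UI under the alert's "Throttle" setting:

# savedsearches.conf (hypothetical alert name and values)
[My Example Alert]
# enable throttling
alert.suppress = 1
# suppress repeat triggers for one hour
alert.suppress.period = 1h
# optionally throttle per field value, e.g. one notification per host per hour
alert.suppress.fields = host
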
Hi @megha_04

Regarding controlling the size of logs - I would recommend looking at https://www.splunk.com/en_us/blog/tips-and-tricks/managing-index-sizes-in-splunk.html as there is a little too much to fit into an answer here! But typically it is managed by setting frozenTimePeriodInSecs per index to control how long (in seconds) each index retains data.

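A minimal indexes.conf sketch of those controls - the stanza name and values are placeholders to adjust per index:

# indexes.conf (placeholder index name and values)
[your_index]
# roll buckets to frozen (deleted by default) after ~90 days
frozenTimePeriodInSecs = 7776000
# cap the total size of the index at ~100 GB
maxTotalDataSizeMB = 102400
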
Hi @super_edition

A simple rex command to split out the common part should hopefully work well here:

| rex field=url "(?<commonUrl>\/experience\/.*)\/?"
| stats count by commonUrl

Full example:

| makeresults count=2
| streamstats count
| eval url=case(count==1,"/us/english/experience/dining/onboard-menu/",count==2,"/ae/english/experience/dining/onboard-menu/")
| rex field=url "(?<commonUrl>\/experience\/.*)\/?"
| stats count by commonUrl

@super_edition

| makeresults
| eval data="/ae/english/experience/dining/onboard-menu/=1;/ae/english/experience/woyf/=2;/uk/english/experience/dining/onboard-menu/=1;/us/english/experience/dining/onboard-menu/=1;/ae/arabic/experience/dining/onboard-menu/=1;/english/experience/dining/onboard-menu/=1"
| makemv delim=";" data
| mvexpand data
| rex field=data "(?<uri>[^=]+)=(?<count>\d+)"
| eval count=tonumber(count)
| eval normalized_uri = replace(uri, "^/[^/]+/[^/]+", "")
| stats sum(count) as hits by normalized_uri

Is there a way to detect unused indexes in Splunk via a query? Also, how can we control the growth of log sizes effectively?
Hello Everyone,

Below is my Splunk query:

index="my_index" uri="*/experience/*"
| stats count as hits by uri
| sort -hits
| head 20

which returns the output below:

/ae/english/experience/dining/onboard-menu/ 1
/ae/english/experience/woyf/ 2
/uk/english/experience/dining/onboard-menu/ 1
/us/english/experience/dining/onboard-menu/ 1
/ae/arabic/experience/dining/onboard-menu/ 1
/english/experience/dining/onboard-menu/ 1

I need to aggregate the URL counts into a common URL. For example:

/experience/dining/onboard-menu/ 5
/experience/woyf/ 2

Appreciate your help on this. Thanks in advance.

I'm working with a CSV lookup that contains multiple fields which may include wildcard (*) values. The lookup is structured such that some rows are very specific and others are generic (e.g. *, *, *, HOST, *). I want to enrich events from my base search with the best-matching Offset (the name of the field) from the lookup.

Challenges:

Using a lookup definition with match_type=WILDCARD(...) only works well if there's a unique match - but in my case, I need to evaluate multiple potential matches and choose the most specific one.
Using | map works correctly, but it's too slow.

@PrewinThomas, I removed the transaction command. Let's make it simple. I need a table which plots the blocked numbers for email, firewall, DLP, EDR, web proxy and WAF for the last 6 months, showing the count for each month and its total, similar to this (screenshot not shown). What I did was modify each query to return data for the last 6 months for each source, and then simply append them into one table, which is not good practice. Hence I am here asking help from the experts. I can share the individual queries if that helps:

Email - here, from the pps datamodel, I am simply counting the spam, inbound and discarded emails:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time

DLP -

index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now
| bin _time span=1mon
| stats count(action) as Blocked by _time

Web proxy -

index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now
| bin _time span=1mon
| stats count as Blocked by _time

EDR -

index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now
| bin _time span=1mon
| search action=blocked NOT action=allowed
| stats dc(event.DetectId) as Blocked by _time

WAF - Web is an accelerated datamodel in my environment, and `security_content_summariesonly` expands to summariesonly=false allow_old_summaries=true fillnull_value=null:

| tstats `security_content_summariesonly` count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon@mon latest=now by _time

Lastly, firewall -

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time

Hi @dmcnulty

The captain is refusing the sync request because the member doesn't have a valid baseline, and the subsequent resync attempt failed because a required snapshot file is missing or inaccessible. The recommended action is to perform a destructive configuration resync on the affected member (SH3). This forces the member to discard its current replicated configuration and pull a fresh copy from the captain.

Run the following command on the affected search head cluster member (SH3):

splunk resync shcluster-replicated-config --answer-yes

This discards the member's locally replicated configuration changes on SH3 and attempts to fetch a complete, fresh copy from the captain. Ensure the captain (SH2) is healthy and has sufficient disk space and resources before running this command.

If the destructive resync fails with the same or a similar error about a missing snapshot file, it might indicate a more severe issue with the captain's snapshot or the member's ability to process the bundle. In that case, check the captain's splunkd.log for any specific errors around replication bundles. If the issue persists, removing the member from the cluster and re-adding it is the standard, albeit more disruptive, next step.

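A quick sanity check around the resync, using a command already mentioned in this thread plus the standard cluster status command (run on a member):

# confirm the captain and members look healthy before and after the resync
splunk show shcluster-status
# re-check replication status once the member has resynced
splunk show bundle-replication-status
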
After running out of disk space on a search head (part of a cluster), now fixed and all SHs rebooted, I get this error:

ConfReplicationException: Error pulling configurations from the search head cluster captain (SH2:8089); Error in fetchFrom, at=: Non-200 status_code=500: refuse request without valid baseline; snapshot exists at op_id=xxxx6e8e for repo=SH2:8089. Search head cluster member (SH3:8089) is having trouble pulling configs from the captain (SH2:8089). xxxxx Consider performing a destructive configuration resync on this search head cluster member.

I ran "splunk resync shcluster-replicated-config" and get this:

ConfReplicationException: Error downloading snapshot: Non-200 status_code=400: Error opening snapshot_file '/opt/splunk/var/run/snapshot/174xxxxxxxx82aca.bundle': No such file or directory.

In the snapshot folder there is usually nothing, sometimes a few files; they don't match the other search heads. 'splunk show bundle-replication-status' is all green and the same as the other 2 SHs.

Is there a force resync switch? I really can't remove this SH and run 'clean all'.

Thank you!

@PickleRick, `security_content_summariesonly` expands to summariesonly=false allow_old_summaries=true fillnull_value=null.

I re-arranged the parameters a bit and it now seems to be loading in around 6 minutes. I need to optimize it. Here is the expanded view of the query:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| append [ search index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now
    | bin _time span=1mon
    | stats count(action) as Blocked by _time
    | eval Source="DLP"]
| append [ search index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now
    | bin _time span=1mon
    | stats count as Blocked by _time
    | eval Source="Web Proxy"]
| append [ search index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now
    | bin _time span=1mon
    | transaction "event.DetectId"
    | search action=blocked NOT action=allowed
    | stats dc(event.DetectId) as Blocked by _time
    | eval Source="EDR"]
| append [| tstats summariesonly=false allow_old_summaries=true fillnull_value=null count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon latest=now by _time
    | eval Source="WAF"]
| append [| tstats summariesonly=false allow_old_summaries=true fillnull_value=null count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
    | eval Source="Firewall"]
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source Dec Jan Feb Mar Apr May Jun

The goal is to get a table like this (screenshot not shown): one row per source, one column per month, with the blocked counts.