All Posts

It might be a bit more complicated than that. The main premise, that for thawing data you're not ingesting anything, is of course true, but:
1) If you don't have a specific license, Splunk Enterprise installs with the default trial license. It has all (OK, most of) the features, but it is time-limited.
2) After the trial period ends you end up with the Free license, which doesn't let you schedule searches or define roles/users.
You might try to run the zero-byte license normally meant for forwarders.
But what makes those "common"? As long as you can answer that question, adjusting your results will be relatively easy.
Add to this the fact that searches can be created dynamically by means of subsearches and/or the map command, and there is no way to find all indexes (not) accessed just by looking at search definitions. One could hypothesize about leveraging some OS-level monitoring to see whether the actual index directories are accessed, but that is unlikely to yield reasonable results either, since Splunk's housekeeping threads must access the indexes to enforce retention policies and the data lifecycle. Having said that, you can search the _internal and _audit logs for executed searches, build a list of the indexes that were used, and thus limit your investigation of whether anyone uses the ingested data to the subset of indexes not on that list.
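As a rough sketch of that _audit approach (the field names and regex are indicative and may need adjusting for your environment; dynamically generated searches and subsearches will still escape it):
index=_audit action=search info=completed search=*
| rex field=search max_match=0 "index\s*=\s*\"?(?<searched_index>[^\s\"]+)"
| stats dc(user) AS users, count AS searches BY searched_index
| sort - searches
Any index that never shows up over a sufficiently long window becomes a candidate for the "possibly unused" subset you then investigate further.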
I see. Now I know why Validate Python reported an error. However, as mentioned earlier, this block of code is automatically generated when I make changes in the visual editor. Changing "code_names" to either "action_names" or "custom_function_names" will result in the visual editor being disabled, which would create big trouble for my future development of this playbook.
Hi @shangxuan_shi
The phantom.completed method doesn't take a code_names param; the function accepts the following:
phantom.completed(action_names=None, playbook_names=None, custom_function_names=None, trace=False)
Check out https://docs.splunk.com/Documentation/Phantom/4.10.7/PlaybookAPI/PlaybookAPI#:~:text=action%20and%20callbacks.-,completed,-The%20completed%20API for more details on the phantom.completed method.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have not encountered this error previously. When I join two code blocks to an action block using the visual editor, a join_***_***_1 block is created. This auto-generated block uses the "code_name" parameter, which is triggering the unexpected-keyword-arg error. I believe deleting this auto-generated block would resolve the problem, but making changes to it disables the visual editor, which is not the right situation. Is there any other alternative solution to resolve this problem?
Hi @Praz_123 , could you share a sample of your logs in text format? Ciao. Giuseppe
This is actually similar to another question I responded to recently at https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-time-range-input/m-p/745721#M58657 This is the snippet which calculated the time string from the time picker: | makeresults | eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$ | eval earliest_epoch = IF(match(earliest,"[0-9]T[0-9]"),strptime(earliest, "%Y-%m-%dT%H:%M:%S.%3N%Z"),earliest), latest_epoch = IF(match(latest,"[0-9]T[0-9]"),strptime(latest, "%Y-%m-%dT%H:%M:%S.%3N%Z"),latest)   @livehybrid wrote: Hi @abhishekP  This is an interesting one. When selecting a relative time window the earliest/latest are values like "-1d@d" which are valid for the earliest/latest field in a search - however when you select specific dates/between dates etc then it returns the full date string such as "2025-05-07T18:47:22.565Z" Such a value is not supported by the earliest/latest field in a Splunk search, to get around this I have put together a table off the side of the display with a search which converts dates into epoch where required. you can then use "$timetoken:result.earliest_epoch$" and "$timetoken:result.latest_epoch$" as tokens in your other searches like this:   Below is the full JSON of the dashboard so you can have a play around with it - hopefully this helps! { "title": "testing", "description": "", "inputs": { "input_global_trp": { "options": { "defaultValue": "-24h@h,now", "token": "global_time" }, "title": "Global Time Range", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_2FDRkepv": { "dataSources": { "primary": "ds_IPGx8Y5Y" }, "options": {}, "type": "splunk.events" }, "viz_V1oldcrB": { "options": { "markdown": "earliest: $global_time.earliest$ \nlatest: $global_time.latest$ \nearliest_epoch: $timetoken:result.earliest_epoch$ \nlatest_epoch:$timetoken:result.latest_epoch$" }, "type": "splunk.markdown" }, "viz_bhZcZ5Cz": { "containerOptions": {}, "context": {}, "dataSources": { "primary": "ds_KXR2SF6V" }, "options": {}, "showLastUpdated": false, "showProgressBar": false, "type": "splunk.table" } }, "dataSources": { "ds_IPGx8Y5Y": { "name": "timetoken", "options": { "enableSmartSources": true, "query": "| makeresults \n| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$\n| eval earliest_epoch = IF(match(earliest,\"[0-9]T[0-9]\"),strptime(earliest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),earliest), latest_epoch = IF(match(latest,\"[0-9]T[0-9]\"),strptime(latest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),latest)" }, "type": "ds.search" }, "ds_KXR2SF6V": { "name": "Search_1", "options": { "query": "index=_internal earliest=$timetoken:result.earliest_epoch$ latest=$timetoken:result.latest_epoch$\n| stats count by host" }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_global_trp" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { "item": "viz_V1oldcrB", "position": { "h": 80, "w": 310, "x": 20, "y": 20 }, "type": "block" }, { "item": "viz_2FDRkepv", "position": { "h": 260, "w": 460, "x": 1500, "y": 20 }, "type": "block" }, { "item": "viz_bhZcZ5Cz", "position": { "h": 380, "w": 1420, "x": 10, "y": 140 }, "type": "block" } ], "type": "absolute" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } }  Did this answer help you? 
If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Real_captain , if you need to use timestamps in a lookup, you could use a time-based lookup, or (better) store your data in a summary index, which always has a timestamp, instead of managing filters and time formats. Ciao. Giuseppe
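If you go the summary index route, a minimal sketch of the idea is a scheduled search that writes timestamped results into a summary index with collect - the index, sourcetype, and field names here are placeholders, not your actual data:
index=my_source_index sourcetype=my_sourcetype
| stats latest(_time) AS job_time BY job_name
| eval _time = job_time
| collect index=my_summary_index
The dashboard searches can then filter on _time directly instead of reconstructing dates from lookup columns.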
Hi @Real_captain
The issue is that the format of the $time_token.earliest$ value passed to strptime is not guaranteed to be %Y-%m-%dT%H:%M:%S. The time token earliest/latest values are typically epoch timestamps or relative time strings, not formatted date strings - e.g. it might be 2025-06-05T07:45:00, but it could just as well be "-d@d".
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
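As a sketch of one way to make the dashboard tolerant of whichever form the token takes (epoch, ISO-style date string, or relative time) - the case() branches below are assumptions to adjust against what your time picker actually emits:
| makeresults
| eval tok = "$time_token.earliest$"
| eval earliest_epoch = case(
    match(tok, "^\d+(\.\d+)?$"), tonumber(tok),
    match(tok, "\dT\d"), strptime(tok, "%Y-%m-%dT%H:%M:%S"),
    true(), relative_time(now(), tok))
| eval base_date = strftime(earliest_epoch, "%Y-%m-%d")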
Hi Team,
Can you please let me know why I am not able to fetch the base_date in the dashboard using the logic below? Please help me fix this issue.
Splunk query:
<input type="time" token="time_token">
  <label>TIME</label>
  <default>
    <earliest>-1d@d</earliest>
    <latest>@d</latest>
  </default>
</input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>
| inputlookup V19_Job_data.csv
| eval base_date = strftime(strptime("$time_token.earliest$", "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%d")
| eval expected_epoch = strptime(base_date . " " . expected_time, "%Y-%m-%d %H:%M")
| eval deadline_epoch = strptime(base_date . " " . deadline_time, "%Y-%m-%d %H:%M")
| join type=left job_name run_id
    [ search index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console system = EOCA host = ddebmfr.beprod01.eoc.net (( TERM(JobA) OR TERM(JobB) ) ) ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    | eval Function = case(like(TEXT, "%ENDED - ABEND%"), "ABEND" , like(TEXT, "%ENDED - TIME%"), "ENDED" , like(TEXT, "%STARTED - TIME%"), "STARTED")
    | eval _time_epoch = _time
    | eval run_id=case( date_hour &lt; 14, "morning", date_hour &gt;= 14, "evening" )
    | eval job_name=if(searchmatch("JobA"), "JobA", "JobB")
    | stats latest(_time_epoch) as job_time by job_name, run_id ]
| eval buffer = 60
| eval status=case( isnull(job_time), "Not Run", job_time &gt; deadline_epoch, "Late", job_time &gt;= expected_epoch AND job_time &lt;= deadline_epoch, "On Time", job_time &lt; expected_epoch, "Early" )
| convert ctime(job_time)
| table job_name, run_id, expected_time, expected_epoch , base_date, deadline_time, job_time, status</query>
        <earliest>$time_token.earliest$</earliest>
        <latest>$time_token.latest$</latest>
This is one huge search. Check each of the "component searches" on its own and see how they fare. Since some of them are raw event searches over half a year's worth of data, possibly touching a significant subset of that data, I expect them to be slow simply because you have to plow through all those events (and one of those subsearches has a very ugly field=* condition which forces Splunk to parse every single event!). If you need that literal functionality from those searches, I see no other way than using some acceleration techniques - the searches themselves don't seem to be very "optimizable". You might try to change some of them to tstats with PREFIX/TERM, but only if your data fits the prerequisites.
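For illustration only, a tstats/PREFIX sketch could look like the following - the index, sourcetype, and the status= key are made up here, and this only works if the key=value pairs are indexed as their own terms (i.e. delimited by major breakers) in your data:
| tstats count WHERE index=my_index sourcetype=my_sourcetype TERM(status=*) BY PREFIX(status=) _time span=1h
| rename "status=" AS status
If the data doesn't meet those prerequisites, report/data-model acceleration or summary indexing are the other usual options.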
Hi @tomapatan
If you structure your lookups so that the more generic match is lower down the lookup than your more specific match, and you have "Max Matches" set to 1, then it should match the more specific value first and fall back to the more generic one if no specific match is found.
For example, in my test lookup the more specific values are at the top, and I have configured a lookup definition with WILDCARD matches and max matches = 1.
Then I run a search; if country/town isn't set I set it to "Unknown", but it could be any value. It maps to 999 because that is the generic value for host1 when town/country is not set. If I now set country=UK, I get a more specific value returned because it matches country=UK town=*. If I use host=host999, it matches host* in the lookup and I get an interestingField value of GHI.
Remember that you have to pass all the fields you want to match on to the lookup command, and you should have the more generic matches lower down the lookup file.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
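As a sketch of the search side of this (the lookup name and default values below are illustrative): once the lookup definition is configured with WILDCARD match types and Max Matches = 1, pass every match field to the lookup command, defaulting any missing ones first, e.g.:
| eval country=coalesce(country, "Unknown"), town=coalesce(town, "Unknown")
| lookup my_wildcard_lookup host, country, town OUTPUT interestingField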
When I checked the logs more in depth, I see that the good logs have fewer than 10000 lines, while the logs that are being truncated have 10001 lines. But I set the TRUNCATE value to 50000 - why is this not being applied?
Hi @Praz_123
In props.conf, use the following settings to extract the timestamp in your sourcetype:
[yourSourcetype]
TIME_PREFIX = ^"
TIME_FORMAT = %m/%d/%y %H:%M:%SZ
Explanation:
- TIME_PREFIX anchors the timestamp extraction immediately after the opening quote at the start of the line.
- TIME_FORMAT matches the date/time format: month/day/two-digit year, space, hour:minute:second, and a trailing "Z" for UTC.
For more info check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Configuretimestamprecognition
If you are able to share a raw event (redacted if required) we can validate it but the above should hopefully work.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @splunklearner
You mention that the props/transforms are pushed to your indexers, but are they also installed on the HF pulling the Akamai logs? Can you validate that the relevant props/transforms, with TRUNCATE set to a higher-than-longest-event value, are installed on the HF?
$SPLUNK_HOME/bin/splunk btool props list sony_waf --debug
If you run this on your HF you should see your TRUNCATE setting at the expected high value. What length are your logs being truncated to?
Your approach of using DS -> CM -> IDX is interesting... but I don't think this is the problem here if the Akamai logs are being pulled by a HF - ultimately we need to ensure the HF has the props!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
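As a complementary check, splunkd normally logs a warning when it truncates an event line, so something along these lines should surface where truncation is actually happening (the exact message text is an assumption and may vary by version, so adjust the string filter if needed):
index=_internal sourcetype=splunkd "Truncating line because limit"
| stats count BY host, component
You can then compare the hosts reporting truncation against where your props are actually deployed.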
A few event logs are getting truncated while others are coming in perfectly. We are using the Akamai add-on to pull logs into Splunk: HF (Akamai input configured) ---> sent to indexers. On the DS all the apps are kept (where all the props and transforms are), and these are pushed to the CM, and from the CM to the individual indexers.
props.conf on the DS (DS --> CM --> IND):
[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = False
TRUNCATE = 50000
The rest of the logs come in perfectly. What should I do now? Please suggest.
Hi, I need the event time and _time to be the same. While importing the data I am getting a time difference. What should I write in the TIME_PREFIX field?
In terms of understanding which indexes are NOT being accessed: this is actually pretty challenging. While it's possible to look in the _audit index and see which indexes are being searched, it's pretty difficult to determine exactly which indexes have been searched, for a number of reasons:
- Different users have access to different indexes, so using wildcards (e.g. index=*) can mean different indexes are accessed depending on roles.
- Macros/tags/eventtypes may contain index references and would need to be determined and expanded.
- Different user roles may have different srchIndexesDefault values, which means they might not specify an index to search and instead rely on the defaults.
Are you using SmartStore/Splunk Cloud? This may offer some slightly different approaches, as we could look at SmartStore cache activity to try to determine the indexes accessed.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @spm807 , as I said, try using throttling in your alerts; it's the solution to your problem. Ciao. Giuseppe
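For reference, a minimal sketch of what alert throttling looks like in savedsearches.conf - the stanza name, field, and period below are placeholders, and the same throttle options can be set in Splunk Web when editing the alert:
[My example alert]
alert.suppress = 1
alert.suppress.period = 1h
alert.suppress.fields = host
With these hypothetical settings the alert will not trigger again for the same host value until one hour has passed since the previous trigger.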