All Posts


14 years later, I'm looking for a solution to the same problem. Has anyone published the code/config? Using FIX 4.4/FIX5 in my case.
Hi

This question is asked quite often, and you can find many explanations in the community quite easily. Here are some posts worth reading to better understand the problem behind your request:

https://community.splunk.com/t5/Splunk-Search/How-can-I-find-the-data-retention-and-indexers-involved/m-p/645374
https://community.splunk.com/t5/Deployment-Architecture/Hot-Warm-Cold-bucket-sizing-How-do-I-set-up-my-index-conf-with/m-p/634691
https://community.splunk.com/t5/Deployment-Architecture/Index-rolling-off-data-before-retention-age/m-p/684799

In short, here is what they mean for your request. There are several attributes you need to combine to get close to your target, but I'm quite sure you cannot combine them so that you get 100% of what you are asking for. @livehybrid has already given you one example as a starting point.

The first issue is that you cannot force the warm -> cold transition by time. The only triggers are the number of warm buckets and the size of homePath (or, if you are using volumes, the total volume size, and usually other indexes share the same volume). None of these depend on time, only on the bucket count and the size of the hot+warm buckets.

The second issue is that, depending on data volumes and the number of indexers, it is even harder to control the number of buckets. All of these settings apply per indexer; there is no relation to the other indexers or to the indexes they hold. In fact it is not even per indexer but per indexing pipeline. So if you have e.g. 10 indexers, all the parameters @livehybrid presented are effectively multiplied by 10; with e.g. 2 ingestion pipelines per indexer, multiply by 2 again; and since each indexer/pipeline normally keeps 3 open hot buckets, multiply by 3 (or by whatever value you have changed that to). This means that when you estimate how many warm buckets you need to keep roughly 12 h in hot, you should divide your data volume by (3 * #pipelines * #indexers) to get an estimate for maxWarmDBCount.

For that estimate to hold, your source events must be spread evenly across all indexers, and it also assumes your data volume is flat over time. If your volume follows e.g. a sine curve, it obviously cannot work. One more thing: if your events are not continuous in time (e.g. occasionally some old logs or events with future timestamps arrive), those trigger the creation of a new bucket and close the old hot bucket even if it is not full.

These are probably not all the aspects you must take care of to achieve what you are asking. You can try to reach your objective, but don't be surprised if you cannot get it to work.

r. Ismo
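To make that estimate concrete, here is a rough worked example. Every number in it (ingest volume, indexer count, pipeline count, bucket size) is an assumption for illustration, not something taken from the original question:

assumed ingest into the index : 60 GB per 12 h, spread evenly across indexers
indexers                      : 10, with 2 ingestion pipelines each
maxDataSize                   : auto (~750 MB per bucket)
data per indexer per 12 h     : 60 GB / 10 = 6 GB
buckets per indexer per 12 h  : 6 GB / 0.75 GB = 8
=> a maxWarmDBCount of roughly 8 per indexer would keep about 12 h of data in hot+warm

Treat this only as a starting point; the real bucket count also depends on the open hot buckets per pipeline and on how evenly the data is spread, so verify against the actual bucket behaviour before relying on it.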
Hi @zksvc

Please could you share your code for doing this check? I suspect that you are counting the number of categories returned rather than the counts in each category - e.g. in that specific example you have "malicious" and "malware". Check that what you're counting isn't an array of objects, and/or share your config/code and I'd be happy to look into it further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi,

I have tried this without the process_path= assignment, since that is in the prefix, so just $Get_Process_Path|s$. Here is a snippet:

<input type="text" token="Get_Process_Path">
  <label>Process Name or Path</label>
  <prefix>process_path="*</prefix>
  <suffix>*"</suffix>
</input>

<query>index=windows EventCode=4688 $Get_Process_Path|s$

This breaks the search; I believe it's because |s wraps additional quotes around what is in the prefix. But I need both of those things to fix the individual issues.
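For illustration only (cmd.exe is a made-up input value, and this assumes the prefix/suffix are applied before the |s filter), the search presumably ends up expanding to something like:

index=windows EventCode=4688 "process_path=\"*cmd.exe*\""

i.e. the whole field=value expression becomes a single quoted literal term, which would explain why the search breaks.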
Hi @BRFZ

Configure the index in indexes.conf as follows to enforce your requirements:
- Set frozenTimePeriodInSecs to 86400 (24 hours) for the total retention.
- Set maxHotSpanSecs to 43200 (12 hours) so that hot buckets roll to warm quickly.
- Set maxWarmDBCount to a low value (and adjust maxDataSize or other thresholds if needed) to force the oldest warm buckets to cold.
- Configure coldToFrozenDir to archive (not delete) data once it leaves cold.

Try this as an example indexes.conf:

[test]
homePath = $SPLUNK_DB/test/db
coldPath = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
# set bucket max age to 12h (hot→warm)
maxHotSpanSecs = 43200
# default size, can reduce for faster bucket rolling
# maxDataSize = auto
# keep small number of warm buckets, moves oldest to cold
# maxWarmDBCount = 1
# total retention 24h
frozenTimePeriodInSecs = 86400
# archive to this path, not delete
coldToFrozenDir = /archive/test

With this setup, data moves from hot→warm after 12h (due to maxHotSpanSecs), the oldest warm buckets roll to cold (enforced by the low maxWarmDBCount), and data is kept for 24h in total before being archived.

Keep the number of buckets (maxWarmDBCount, etc.) low so that data moves through the states quickly for such a short retention. Splunk is optimised for longer retention; very short retention and frequent bucket transitions can increase management overhead. Small buckets are generally discouraged for that reason, but with such a short retention period you shouldn't end up with too many buckets here.

Other things to remember:
- If you use coldToFrozenDir, ensure permissions and disk space are sufficient at the archive destination.
- Test carefully, as a low maxWarmDBCount and a short maxHotSpanSecs may result in more buckets than usual and some performance impact.
- If you want to restore archived data, it must be manually thawed.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
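Not part of the original answer, but a generic way to verify that buckets actually transition as intended is to watch the bucket states for the index over time, for example:

| dbinspect index=test
| stats count min(startEpoch) AS oldestEvent max(endEpoch) AS newestEvent by state

Run this periodically and confirm that nothing much older than ~12h remains in the hot/warm states and that data leaves cold around the 24h mark.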
I've obtained this information from VirusTotal, and I want to create a playbook to check IP reputation and retrieve the results. I want to add a decision where, if the result is greater than 0, it writes a note stating 'It's malicious from VirusTotal.' You can see the example: the Community Score, or information like '4/94 security vendors flagged.' That is the VirusTotal value I want to compare against from the playbook. However, when I run it, it only shows 'detected urls: 2.' Can someone explain this?
Hello,

I'm looking to set up a log retention policy for a specific index, for example index=test. Here's what I'd like to configure:
- Total retention time = 24 hours
- First 12 hours in hot+warm, then
- Next 12 hours in cold.
- After that, the data should be archived (not deleted).

How exactly should I configure this, please? Also, does the number of buckets need to be adjusted to support this setup properly on such a short timeframe?

Thanks in advance for your help.
Please can you confirm the field names in your OS lookup? Thanks
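In case it's useful, a quick way to check the field names (and a few sample rows) in the lookups is:

| inputlookup OS_Outdated.csv | head 5

(and the same for snow_os.csv). The field names shown there need to match exactly what the lookup command references.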
So you terminated the old node and then brought up a new one. But what about Splunk in these cases? How was it installed, and what happened to the configuration and old data? Was this a totally clean installation that was then added to the cluster, or an old installation with the GUID, <index>.dat files and the real indexes?
index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| rex field=DeviceName "(?<DeviceName>\w{3}-\w{1,})."
| eval DeviceName=upper(DeviceName)
| lookup snow_os.csv DeviceName output OS BuildNumber Version
| lookup OS_Outdated.csv OperatingSystems as OS BuildNumber Version OUTPUT Outdated
| fillnull value=false outdated
| table DeviceName OS BuildNumber Version Outdated

This is what I am using, but the problem is that this line:

| lookup OS_Outdated.csv OperatingSystems as OS BuildNumber Version OUTPUT Outdated

is not generating any results.
I am doing this, but the Outdated field is showing nothing.
Hi, the path you search is:

resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue

but in your JSON data example, the path to stringValue is:

resourceSpans{}.resource{}.attributes{}.value.stringValue

Maybe that helps.
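For illustration, a minimal self-contained search (the JSON below is a made-up stand-in for your event, not your real data) that extracts the field using the second path:

| makeresults
| eval _raw="{\"resourceSpans\": [{\"resource\": [{\"attributes\": [{\"value\": {\"stringValue\": \"CONSO_ABAQ | 31/03/2016 | 22\"}}]}]}]}"
| spath path="resourceSpans{}.resource{}.attributes{}.value.stringValue" output=stringValue
| table stringValue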
Hi @kn450, With respect to your prior comments: "... it's important to note that the queries used are not written in Splunk’s native SPL language; instead, they rely on Elasticsearch queries. This limits the integration with some of Splunk’s core functionalities and does not provide the desired level of efficiency in terms of performance and deep analysis." I use custom generating commands to run Elasticsearch searches, and I treat the results as if they came from a similar base SPL command. I agree the ideal would be a virtual index or federated search that compiles a search command into equivalent Elasticsearch Query DSL, for example, but that isn't presently feasible. What Splunk functionality would you like to use with custom search commands, including those from apps on Splunkbase, that you cannot use? Do you have specific use cases in mind?
Unfortunately, the issue is back on 2 indexers again.
The nodes are in a scaling group and were replaced one by one. Everything worked without any issues in a different environment.
Hi @kn450, @Saba

I encountered this same issue a few days back and solved it by running a playbook that does a Splunk search to create the event_id from the data in my artifact. The macro `get_event_id_meval` is used to create the event id from the indexer_guid, index and event_hash fields, separated by "@@", i.e. indexer_guid@@index@@event_hash. Is this the best way? Probably not, but it does work, and I can always update it should I find a better solution. See the search below.

index=notable search_name="<your_search_name>" firstTime="xxxx" lastTime="xxxx"
| eval `get_event_id_meval`
| fields event_id
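For reference, based purely on the description above (check the actual macro definition in your own environment before relying on it), the macro effectively does something along the lines of:

| eval event_id=indexer_guid . "@@" . index . "@@" . event_hash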
I see, okay - in that case I think the below might work for you? This works by setting the field name into the value, so you don't need something=$token$; you just do $token$, as it already contains something= within it:

<form version="1.1">
  <label>Demo</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="testToken" searchWhenChanged="true">
      <label>Test. Token</label>
      <choice value="*">All</choice>
      <choice value="&quot;resourceSpans{}.resource.attributes{}.value.stringValue&quot;=&quot;CONSO_ABAQ | 31/03/2016 | 23&quot;">CONSO_ABAQ | 31/03/2016 | 23 (Static)</choice>
      <fieldForLabel>obj</fieldForLabel>
      <fieldForValue>option</fieldForValue>
      <search>
        <query>| makeresults | eval obj="CONSO_ABAQ | 31/03/2016 | 22" | eval option="\"resourceSpans{}.resource.attributes{}.value.stringValue\"=\"".obj."\""</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval _raw=json_set("{}","resourceSpans{}.resource.name.stringValue","Testing") | append [makeresults | eval _raw=json_set("{}","resourceSpans{}.resource.attributes{}.value.stringValue","CONSO_ABAQ | 31/03/2016 | 22")] | spath | search $testToken$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Jacob_Edelen , Schedule PDF delivery was added in 9.3. You can check out our "What's new in Dashboard Studio" docs to see new features shipped with each version: https://docs.splunk.com/Documentation/Splunk/9.4.2/DashStudio/WhatNew#:~:text=Schedule%20PDF%20and%20PNG%20exports%20of%20dashboards
Will this be available in version 9.4.2? We are upgrading in the coming weeks and have people itching for it.
Why are you using process_path=$Get_Process_Path|s$ if $Get_Process_Path$ already has the process_path="* prefix? Is this what is causing the issue?