All Posts

Thank you for your response. How do I configure an index so that it is limited to 80% and further data is prevented from being written in once that limit is reached?
It might be worth checking in index=main in case your "lastChanceIndex" is set to main. The other thing you could do is perform a search across all indexes, *however* I would generally advise against index=* searches - so do this sparingly!  | tstats count where index=* sourcetype="sourcetype::xxx" by index  Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards Will
Hi @arunkuriakose  Ensure that the severity is not being overwritten by any other SPL or macros within your search, as macros within ES can contain a lot of logic and may be overwriting your custom severity. It might help if you could share your search here. When you look at the raw notables generated, is the severity showing as high or medium? (Not just in the UI/ES dashboards)
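For reference, a minimal sketch for inspecting the raw notables directly, assuming a standard ES setup where notable events land in index=notable and carry a severity field; the search_name value is a placeholder to replace with your own correlation search name:

index=notable search_name="*Your Correlation Search*"
| table _time search_name severity urgency
| sort - _time

If the severity is already wrong on these raw events, the overwrite is happening in the correlation search itself rather than in the ES presentation layer.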
tl;dr: It should continue writing but will drop the oldest data if configured correctly.

In Splunk, the index size can indeed exceed the maximum limit if not properly configured, which can result in data deletion or data writing being suspended. The critical settings to monitor are maxTotalDataSizeMB, frozenTimePeriodInSecs, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB.

maxTotalDataSizeMB: This setting specifies the maximum overall size (in MB) allowed for an index. When this limit is reached, Splunk automatically rolls the oldest data to the "frozen" state, which by default means deletion unless the coldToFrozenScript or coldToFrozenDir settings are configured.

frozenTimePeriodInSecs: This defines how long data can remain in the index before being frozen. Once the time elapses, the data is typically deleted unless alternative archiving options are specified.

homePath.maxDataSizeMB: This setting controls the maximum size of the home path, encompassing hot and warm buckets. If this threshold is surpassed, older buckets move to cold storage.

coldPath.maxDataSizeMB: This defines the maximum size for cold storage. Exceeding this limit results in the freezing of older buckets.

If an index is reaching its limits frequently, evaluate your data volumes, review these configurations, and ensure that your Splunk setup can handle projected growth, to avoid unintended data loss or performance degradation. Splunk will start rolling data to frozen when either frozenTimePeriodInSecs or maxTotalDataSizeMB is reached, whichever comes first. This means that even if you expect 30 days of data, if there is only enough disk space for 10 days then Splunk will start rolling data to frozen (which may mean deletion) sooner than expected. If your free disk space drops below the value set in server.conf/[diskUsage]/minFreeSpace (defaults to 5000MB) then Splunk will stop letting you execute searches.

Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards Will
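As an illustration, a minimal indexes.conf sketch showing how these settings fit together; the index name, paths, sizes, and retention values below are hypothetical examples, not taken from this thread:

# indexes.conf on the indexers - illustrative values only
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

# Freeze (delete by default) the oldest buckets once the whole index exceeds ~100 GB
maxTotalDataSizeMB = 102400

# ...or once events are older than 30 days, whichever comes first
frozenTimePeriodInSecs = 2592000

# Optionally cap hot/warm and cold storage separately
homePath.maxDataSizeMB = 51200
coldPath.maxDataSizeMB = 51200

# Optionally archive frozen buckets instead of deleting them
# coldToFrozenDir = /archive/my_index

With a layout like this, the index keeps accepting new data and simply ages out the oldest buckets as the limits above are reached.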
When the index size exceeds the maximum, will Splunk delete data or stop writing data?
Please provide more detail as to what exactly you are doing; your current description is too vague to determine how to advise you.
Yes, correct. If I remove | search httpcode=500, I can get the results.
If you remove | search httpcode=500 from the Python code, does it return all the data as expected from the parent index?
| eval Comp=case(Dept=="IT","XXXX",Dept=="Bus","YYYY",Dept=="Pay","ZZZZ")
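For reference, a run-anywhere sketch that mocks up the sample rows from the related question in this thread with makeresults and applies the same case() mapping; the names, departments, and company values are the question's example values, not real data:

| makeresults count=3
| streamstats count AS "S.No"
| eval Name=case('S.No'==1,"Andy",'S.No'==2,"Chris",'S.No'==3,"Nike"), Dept=case('S.No'==1,"IT",'S.No'==2,"Bus",'S.No'==3,"Pay")
| eval Comp=case(Dept=="IT","XXXX",Dept=="Bus","YYYY",Dept=="Pay","ZZZZ")
| table "S.No" Name Dept Comp

In a real search, only the final eval and table lines are needed after your existing base search.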
We have JSON fields to be auto-extracted in Splunk, and some non-JSON data to be removed before auto-extracting. So I put the following props.conf on my indexers:

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = JSON
TRUNCATE = 20000

and this props.conf on my SH:

[sony_waf]
KV_MODE = none
AUTO_KV_JSON = false

and this props.conf on my UF (which was there from before):

[sony_waf]
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true

When I did this, duplicate events started populating. When I remove INDEXED_EXTRACTIONS from the indexers and keep it in the UF props.conf, logs are not ingested at all. I tried setting KV_MODE = json (removing the KV_MODE = none and AUTO_KV_JSON settings on the SH) and still get the same duplication. I am completely confused here. Even after removing everything I had configured, duplicate logs still come in. I checked the log path at the source and no duplicate logs are showing there. I have also tried crcSalt and still have the same issue. Please guide me on the correct config in the correct place.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="ipaddress",
    port=portnumber,
    username="username",
    password="password",
    scheme="https"
)

kwargs_blockingsearch = {"exec_mode": "normal",
                         "earliest_time": "-15m",
                         "latest_time": "now",
                         "enable_lookups": "true"}

searchquery_blocking = '''search index=sample source="*sample*" AND host="v*lu*" OR host="s*mple*" | search httpcode=500 '''

job = service.jobs.create(searchquery_blocking, **kwargs_blockingsearch)

while True:
    while not job.is_ready():
        pass
    if job["isDone"] == "1":
        break

results = job.results(**{"output_mode": "json"})
print(results)
Hi, I am displaying a table as a result from the search, however I would like to add an additional column with static values based on an existing column. For example:

S.No    Name    Dept
1       Andy    IT
2       Chris   Bus
3       Nike    Pay

In the above table, I would like to add another column called Company and map its value based on the Dept column as below:
If Dept is IT, then the value for Company is XXXX
If Dept is Bus, then the value for Company is YYYY
If Dept is Pay, then the value for Company is ZZZZ

and the final table should look like:

S.No    Name    Dept    Comp
1       Andy    IT      XXXX
2       Chris   Bus     YYYY
3       Nike    Pay     ZZZZ

@ITWhisperer Dashboard
@Meett Hello, thank you for your kind reply. I am glad to hear that you know of a case where the plug-in is used with v14.2. I'll research more and figure out what to do next.
Hi @splunk_user_99  You can get network/log data from the Team Red/Blue "Boss of the SOC" exercises found at https://github.com/splunk/securitydatasets These come in a Splunk-ready format for you to add into your instance and work on.
Thank you for your explanation, it works very well. KR Théo
Hi @harishsplunk7  I can't see that as an option so I don't think so. You could add an action to email you if the search fails so you can investigate.
Hi @sekarjegan93  When you add a visualisation, it's given an auto-generated name such as "viz_XQInZkvE". The code snippet you shared does not include an element ID. Did you change the name of this element in the code? Maybe you deleted that element? Are you trying to refresh a single visualisation or the whole dashboard?
Hi @zksvc  Looks like a binary file was read there. Have you followed the steps here https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/MonitorWindowseventlogdata ?
Thank you so much for the explanation. This makes so much sense when you describe it (and is something I should have been able to think of myself).
Hi @BalajiRaju  Can you provide the base search you're using in Splunk and the Python code for us to see?