All Posts

I'm using Splunk Enterprise 9 on Windows Server 2019 and monitoring a simple log file that has CRLF line endings and is encoded as UTF-8. My inputs stanza is as follows:

[monitor://c:\windows\debug\test.log]
disabled = 0
sourcetype = my_sourcetype
index = test

Consider two consecutive lines in the log file:

Some data 1
Some data 2

When indexed, this creates a single event rather than my expectation of 2 events. Where am I going wrong?
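One common cause of this symptom is Splunk's line merging: SHOULD_LINEMERGE defaults to true, and lines without a recognizable timestamp (like "Some data 1") can be merged into a single event. A hedged props.conf sketch, using the sourcetype name from the post (apply on the indexer or heavy forwarder that parses the data, then restart and re-ingest):

```
[my_sourcetype]
# Break events purely on newlines; don't merge timestamp-less lines
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8
```

This is a sketch under the assumption that line merging is the culprit; if the file were actually UTF-16, CHARSET would need adjusting as well.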
Hi @krutika_ag, you should know where these logs come from, or at least which system produced them. If not, you could try Splunk's automatic sourcetype recognition, but I don't like this solution because the error margin is very large. You could open the files and take some strings to search on the internet for the technology that produced them, and then look for the corresponding sourcetype. Ciao. Giuseppe
Hi Ashish, does using the severity value not work, or is there a reason you require violationStatus? Can you share a screenshot from within the controller showing which violationStatus you are trying to get into the event?
I need help understanding which sourcetype would be ideal to parse logs of this file type.
I'm experiencing exactly the same error. 
@Mario.Morelli - Please respond. As I said, Status="${latestEvent.violationStatus}" is not working. Do we have any other variable to get the data?
I reached out to the development team and let them know your concerns, I have the same issue with an app I am developing as well. Not sure there will be any changes, but at least they know about it. 
Hello All,

I am currently testing an upgrade from Splunk Enterprise version 9.0.4 to 9.2.0.1 but get the below error.

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndPointError, InvalidStatusCodeError
MemoryError
Error running pre-start tasks.

I will add that there are a few more lines to the error, but this is an air-gapped environment and I'm hoping there is no need to manually type it all out.

TIA
Leon
Hi all, I have to track Splunk modifications (correlation searches, conf files, etc.). I tried using the _configtracker index, which is complete and answers all my requirements, but it doesn't track the user that performs an action. How could I do this? Thank you for your help. Ciao. Giuseppe
Hi, I'm receiving the following error message: Error in 'EvalCommand': Failed to parse the provided arguments. Usage: eval dest_key = expression. I am trying to create the search via the REST API. Is there something special that I need to know about API calls? Via the UI, the search works. Thanks!
Hi, as that probably is a UUID string, you could make a stricter regex to match it, like [0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}. From time to time your data could contain something that matches e.g. [^\"]+ but is still e.g. a UUID. Also, those regexes use different amounts of resources. With only some events this is usually not an issue, but if/when you have e.g. billions of events, then even 1 ms starts to make a difference. You can experiment with e.g. regex101.com. This happens quite often e.g. with SSNs, bank accounts, etc. So look at your data and use the expression which matches it best! r. Ismo
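To illustrate the point above, here is a small Python sketch (sample strings are made up) comparing the strict canonical UUID pattern (8-4-4-4-12 lowercase hex groups) against loose catch-all matching:

```python
import re

# Strict pattern for a canonical lowercase UUID: 8-4-4-4-12 hex digits.
UUID_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b"
)

def extract_uuid(text):
    """Return the first canonical UUID in text, or None if there is none."""
    m = UUID_RE.search(text)
    return m.group(0) if m else None

# A loose pattern like [^"]+ would happily "match" both of these;
# the strict one only accepts the real UUID.
print(extract_uuid('id="123e4567-e89b-12d3-a456-426614174000" ok'))
# -> 123e4567-e89b-12d3-a456-426614174000
print(extract_uuid('id="not-a-uuid"'))
# -> None
```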
If you just want to format those on display, but not convert them to strings, then you should use the fieldformat command instead of eval inside the foreach loop. I'm not sure if the bug in fieldformat has already been fixed or not; at least in some earlier versions it didn't work correctly in all cases inside a foreach loop. If it doesn't work, just use an additional field name with eval, and you can use the original for calculations later.
Disclaimer: This is in no way supported and will break Splunk support. Don't use this in a production environment. There are better ways to solve this, I'm just not smart enough to figure them out. This hack breaks at the next update - always - so extra care should be taken.

I think it would be very nice if Splunk could support the desired behavior, maybe as an option or configuration.

There are three places that influence the maintenance window calculation.

service_health_metrics_monitor, original:

| mstats latest(alert_level) AS alert_level WHERE `get_itsi_summary_metrics_index` AND `service_level_max_severity_metric_only` by itsi_kpi_id, itsi_service_id, kpi, kpi_importance
| lookup kpi_alert_info_lookup alert_level OUTPUT severity_label AS alert_name
| `mark_services_in_maintenance`
| `reorganize_metrics_healthscore_results`
| gethealth
| `get_info_time_without_sid`
| lookup service_kpi_lookup _key AS itsi_service_id OUTPUT sec_grp AS itsi_team_id
| search itsi_team_id=*
| fields - alert_severity, color, kpi, kpiid, serviceid, severity_label, severity_value
| rename health_score AS service_health_score
| eval is_null_alert_value=if(service_health_score="N/A", 1, 0), service_health_score=if(service_health_score="N/A", 0, service_health_score)

This could be changed to:

| mstats latest(alert_level) AS alert_level WHERE `get_itsi_summary_metrics_index` AND `service_level_max_severity_metric_only` by itsi_kpi_id, itsi_service_id, kpi, kpi_importance
| lookup kpi_alert_info_lookup alert_level OUTPUT severity_label AS alert_name
| `mark_services_in_maintenance`
| `reorganize_metrics_healthscore_results`
| gethealth
| `get_info_time_without_sid`
| lookup service_kpi_lookup _key AS itsi_service_id OUTPUT sec_grp AS itsi_team_id
| fields - alert_severity, color, kpi, kpiid, serviceid, severity_label, severity_value
| rename health_score AS service_health_score
| `mark_services_in_maintenance`
| eval is_null_alert_value=if(service_health_score="N/A", 1, 0), service_health_score=if(service_health_score="N/A", 0, service_health_score), alert_level=if(is_service_in_maintenance=1 AND alert_level>-2,-2,alert_level)

I have added an extra call to the macro "mark_services_in_maintenance" and expanded the last eval to set alert_level to maintenance.

service_health_monitor, original:

`get_itsi_summary_index` host=atp-00pshs* `service_level_max_severity_event_only`
| stats latest(urgency) AS urgency latest(alert_level) AS alert_level latest(alert_severity) as alert_name latest(service) AS service latest(is_service_in_maintenance) AS is_service_in_maintenance latest(kpi) AS kpi by kpiid, serviceid
| lookup service_kpi_lookup _key AS serviceid OUTPUT sec_grp AS itsi_team_id
| search itsi_team_id=*
| gethealth
| `gettime`

Could be changed to:

`get_itsi_summary_index` `service_level_max_severity_event_only`
| stats latest(urgency) AS urgency latest(alert_level) AS alert_level latest(alert_severity) as alert_name latest(service) AS service latest(is_service_in_maintenance) AS is_service_in_maintenance latest(kpi) AS kpi by kpiid, serviceid
| gethealth
| `gettime`
| `mark_services_in_maintenance`
| eval alert_level=if(is_service_in_maintenance=1 AND alert_level>-2,-2,alert_level), color=if(is_service_in_maintenance=1 AND alert_level=-2,"#5C6773",color), severity_label=if(is_service_in_maintenance=1 AND alert_level=-2,"maintenance",severity_label), alert_severity=if(is_service_in_maintenance=1 AND alert_level=-2,"maintenance",alert_severity)

Again, an extra call to the macro "mark_services_in_maintenance", and the eval at the bottom to set the service in maintenance.

Those two will ensure the service appears in maintenance in the Service Analyser and glass tables. I think it also takes care of deep dives, but they don't appear to turn dark grey. In order to ensure correct calculation, we also have to make changes in the "gethealth" search command.
The Python script that is of interest is located here: "SPLUNK_HOME/etc/apps/SA-ITOA/lib/itsi/searches/compute_health_score.py". Search for "If a dependent service is disabled, its health should not affect other services". You should find some code that looks like this:

for depends_on in service.get('services_depends_on', []):
    # If a dependent service is disabled, its health should not affect other services
    dependent_service_id = depends_on.get('serviceid')
    dependency_enabled = [
        svc.get('enabled', 1)
        for svc in self.all_services
        if dependent_service_id == svc.get('_key')
    ]
    if len(dependency_enabled) == 1 and dependency_enabled[0] == 0:
        continue
    for kpi in depends_on.get('kpis_depending_on', []):
        # Get urgencies for dependent services

What I want is to replicate the behavior of a disabled service:

for depends_on in service.get('services_depends_on', []):
    # If a dependent service is disabled, its health should not affect other services
    dependent_service_id = depends_on.get('serviceid')
    dependency_enabled = [
        svc.get('enabled', 1)
        for svc in self.all_services
        if dependent_service_id == svc.get('_key')
    ]
    if len(dependency_enabled) == 1 and dependency_enabled[0] == 0:
        continue
    # If a dependent service is in maintenance, its health should not affect other services - ATP
    maintenance_service_id = depends_on.get('serviceid')
    try:
        isinstance(self.maintenance_services, list)
    except:
        self.maintenance_services = None
    if self._is_service_currently_in_maintenance(maintenance_service_id):
        self.logger.info('ATP - is service in maintenance %s', self._is_service_currently_in_maintenance(maintenance_service_id))
        continue
    for kpi in depends_on.get('kpis_depending_on', []):
        # Get urgencies for dependent services

So I added a call to the existing function _is_service_currently_in_maintenance. Unfortunately this failed at first, as the table maintenance_services is uninitialized (hence the try/except block); now it is just a simple check of whether the service we depend on is in maintenance, and if it is, we break out with continue.

Again, this is NOT supported in any way, should not be used in production, and will break at the next update. Kind regards
Thanks for your help. Solution is working as expected. 
Yes, I have hyphens and a full stop in the hostname that need to be considered. So far I have identified those 4 patterns, and that should be it.
Is the data stored the same way in both environments? For example, are there the same indexes, sourcetypes, props/transforms, etc. in both environments? Are the IO resources equal on both nodes? Is anything other than Splunk running on those nodes? You should set up the Monitoring Console on both nodes and use it to see what is happening. Start with the health check part. It will tell you if there are configurations which do not meet Splunk's requirements.
We have installed the "Proofpoint TAP Modular Input" add-on on a Victoria search head and created an input (API call) to fetch the logs. On the first run it fetched one event, and from the next runs onward it throws an error: "pp_tap_input: When trying to retrieve the last poll time, multiple kvstore records were found". We tried creating a new input and observed the same behavior.
Hi, that .conf presentation which @kiran_panchavat is referring to is excellent, even if it's a little bit old and doesn't contain all the new stuff like SmartStore (S2). Please read it, and also some other answers which discuss that same issue. In short: you cannot ensure that events are moved into cold storage based on age! There is no parameter which defines this for warm buckets. Moving from warm to cold is driven by bucket count, not by time. frozenTimePeriodInSecs is used for moving cold buckets to frozen (archiving them outside of Splunk, or removing them, which is the default action). r. Ismo
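To make the distinction concrete, here is a hedged indexes.conf sketch (index name and values are examples, not recommendations) showing which parameter drives each transition:

```
[test]
homePath   = $SPLUNK_DB/test/db
coldPath   = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
# Warm -> cold: driven by bucket COUNT, not age.
# When this many warm buckets exist, the oldest rolls to cold.
maxWarmDBCount = 300
# Cold -> frozen: driven by event AGE (here ~180 days).
# Default frozen action is deletion unless an archive script/path is set.
frozenTimePeriodInSecs = 15552000
```

So age only controls the cold-to-frozen step; the warm-to-cold step happens whenever the warm bucket count is exceeded, regardless of how old the events are.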
Hi, one way is to use the transaction command, like

| makeresults format=csv data="_time,user,action
1710320306,u09,unlocked
1710320356,u09,locked
1710320360,u10,unlocked
1710320363,u10,locked
1710320369,u11,unlocked
1710320374,u11,locked
1710320379,u09,unlocked
1710320384,u09,locked
1710320389,u10,unlocked
1710321119,u10,locked
1710321126,u11,unlocked
1710322754,u11,locked
1710322760,u09,unlocked
1710324580,u09,locked
1710326550,u09,unlocked
1710328364,u09,locked"
| transaction startswith="action=unlocked" endswith="action=locked" user
| fieldformat duration=tostring(duration,"duration")

r. Ismo
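For readers less familiar with transaction: the pairing it performs here can be sketched in plain Python (this is an illustration of the logic, not Splunk itself; a short sample of the question's data is used):

```python
from collections import defaultdict

# (timestamp, user, action) rows, as in the question's log
rows = [
    (1710320306, "u09", "unlocked"),
    (1710320356, "u09", "locked"),
    (1710320360, "u10", "unlocked"),
    (1710320363, "u10", "locked"),
    (1710320369, "u11", "unlocked"),
    (1710320374, "u11", "locked"),
    (1710320379, "u09", "unlocked"),
    (1710320384, "u09", "locked"),
]

def unlocked_seconds(rows):
    """Sum, per user, the seconds between each unlocked and the next locked."""
    open_time = {}             # user -> timestamp of their open "unlocked"
    totals = defaultdict(int)  # user -> total unlocked duration in seconds
    for ts, user, action in sorted(rows):
        if action == "unlocked":
            open_time[user] = ts
        elif action == "locked" and user in open_time:
            totals[user] += ts - open_time.pop(user)
    return dict(totals)

print(unlocked_seconds(rows))
# -> {'u09': 55, 'u10': 3, 'u11': 5}
```

This mirrors what transaction's startswith/endswith grouping by user computes as the duration field; in SPL you would then sum the durations per user (e.g. with stats).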
Hello! I have a log that shows locking/unlocking of PCs:

1710320306,u09,unlocked
1710320356,u09,locked
1710320360,u10,unlocked
1710320363,u10,locked
1710320369,u11,unlocked
1710320374,u11,locked
1710320379,u09,unlocked
1710320384,u09,locked
1710320389,u10,unlocked
1710321119,u10,locked
1710321126,u11,unlocked
1710322754,u11,locked
1710322760,u09,unlocked
1710324580,u09,locked
1710326550,u09,unlocked
1710328364,u09,locked

The first field is a unix timestamp, the second the user, the third the action. I need to get statistics on PCs being unlocked by users, i.e. the sum of seconds between unlocked-locked actions for each user. Please help with a search query.