All Posts


Hi @sireesha.vadlamuru, I'm reaching out again, looking for some clarity on what help you need.
I have an issue with adding indexed fields to each of the new (split) sourcetypes. The configuration below "duplicates" the indexed fields for each sourcetype: I now see the fields indexedfields1, indexedfields2 and indexedfields3 at 200%. For example, indexedfields1 values: value1 150%, value2 50%.

props.conf:

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

transforms.conf:

[indexedfield1]
REGEX =
FORMAT =
WRITE_META =

[indexedfield2]
REGEX =
FORMAT =
WRITE_META =

[indexedfield3]
REGEX =
FORMAT =
WRITE_META =

[sourcetype1]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype1

[sourcetype2]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype2

I then thought to move the indexed-field transforms to each of the new sourcetypes, but then I see no indexed fields at all.
I check with | tstats count.

props.conf:

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

What is the needed configuration to see the indexed fields per sourcetype, without showing 200%? Thanks
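A sketch of the kind of tstats verification search meant above, which surfaces the per-sourcetype percentages; the index name is a placeholder for your own:

```spl
| tstats count WHERE index=your_index BY sourcetype, indexedfields1
| eventstats sum(count) AS total BY sourcetype
| eval pct = round(100 * count / total, 1)
```

This groups the indexed field's values under each sourcetype separately, so a value counted against both the original and the rewritten sourcetype shows up as the doubled percentages described.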
Hi @Zoltan.Gutleber, Given that this post is a few years old, it's unlikely to get a reply from the original poster. At this point, it might be best to reach out to AppD Support: How do I submit a Support ticket? An FAQ 
Hi, we have 3 indexers and 1 search head (replication factor = 3). I need to permanently remove one indexer. What is the correct procedure:

1. Change the replication factor to 2 and then remove the indexer, OR
2. Remove the indexer and then change the replication factor to 2?

Thanks
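For reference, a sketch of the commands usually involved in this kind of decommissioning, not a verified procedure; check the indexer-cluster documentation for your version before running anything:

```shell
# On the cluster manager: lower the replication factor so the
# remaining two peers can satisfy it
splunk edit cluster-config -replication_factor 2

# On the peer being removed: take it offline and let the cluster
# re-enforce replication/search factor counts before decommissioning
splunk offline --enforce-counts
```

Note that with only two peers left, a replication factor of 3 can never be met, which is why the order of these two steps matters.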
Hi, I don't think it exists. I submitted this question, which also interests me, as an idea for future development. You could add a vote to my idea https://ideas.splunk.com/ideas/ESSID-I-392 so that it is more visible and gets taken into consideration. Many thanks
Hi @lakshman239, thanks for the info. Can you provide some more insight? What are the additional rules? I have a similar request: I am able to telnet to port 1521 from Splunk, but the connectivity check still says it is blocked by a firewall when submitting.
I'm using Splunk Enterprise 9 on Windows Server 2019 and monitoring a simple log file that has CRLF line endings and is encoded as UTF-8. My inputs stanza is as follows:

[monitor://c:\windows\debug\test.log]
disabled = 0
sourcetype = my_sourcetype
index = test

Consider two consecutive lines in the log file:

Some data 1
Some data 2

When indexed, this creates a single event rather than my expected 2 events. Where am I going wrong?
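For comparison, a minimal parsing-time sketch commonly used for simple line-per-event files; note that these props.conf settings take effect on the component that parses the data (indexer or heavy forwarder), not on a universal forwarder, and the stanza name must match the sourcetype from inputs.conf:

```ini
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8
```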
Hi @krutika_ag, you should know where these logs come from, or at least what system produced them. If not, you could try Splunk's automatic recognition, but I don't like this solution because the margin of error is very large. You could open the files, take some strings, and search the internet for the technology that produced them, then look up the corresponding sourcetype. Ciao. Giuseppe
Hi Ashish, does using the severity value not work, or is there a reason you require violationStatus? Can you share a screenshot from within the controller showing which violationStatus you are trying to get into the event?
I need help understanding what sourcetype would be ideal to parse logs of this file type.
I'm experiencing exactly the same error. 
@Mario.Morelli - please respond. As I said, Status="${latestEvent.violationStatus}" is not working. Do we have any other variable to get the data?
I reached out to the development team and let them know your concerns; I have the same issue with an app I am developing as well. Not sure there will be any changes, but at least they know about it.
Hello all, I am currently testing an upgrade from Splunk Enterprise version 9.0.4 to 9.2.0.1 but get the error below:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndPointError, InvalidStatusCodeError
MemoryError
Error running pre-start tasks.

I will add that there are a few more lines to the error, but this is an air-gapped environment and I'm hoping there is no need to manually type it all out.

TIA, Leon
Hi all, I have to track Splunk modifications (correlation searches, conf files, etc.). I tried to use the _configtracker index, which is complete and answers all my requirements, but it doesn't track the user that performs an action. How could I do this? Thank you for your help. Ciao. Giuseppe
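As a sketch of one possible direction: the _audit index records the acting user for many configuration actions, so the user attribution missing from _configtracker can sometimes be recovered from there. Field values vary by Splunk version and action type, so treat this as a starting point only:

```spl
index=_audit action=edit* OR action=update*
| table _time user action info
```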
Hi, I'm receiving the following error message:

Error in 'EvalCommand': Failed to parse the provided arguments. Usage: eval dest_key = expression.

I am trying to create the search via the REST API. Is there something special that I need to know about API calls? Via the UI, the search works. Thanks!
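For what it's worth, a common cause of this kind of REST failure is the SPL string not being URL-encoded, so quotes, equals signs, and pipes get mangled in transit. A hedged curl sketch against the standard saved/searches endpoint; the credentials, search name, and search string are placeholders:

```shell
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches \
  -d name=my_rest_search \
  --data-urlencode 'search=index=_internal | eval dest_key=source . ":" . sourcetype'
```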
Hi, as that probably is a UUID string, you could make the regex stricter to match it, like [0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}. From time to time your data could contain something that matches e.g. [^\"]+ but is still e.g. a UUID. Also, those regexes use different amounts of resources. With only some events this is usually not an issue, but when you have e.g. billions of events, even 1 ms starts to make a difference. You can check this with e.g. regex101.com. This happens quite often with e.g. SSNs, bank accounts, etc. So look at your data and use the expression which matches it best! r. Ismo
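For illustration, the stricter UUID pattern dropped into an ad-hoc extraction might look like this; the request_id field name is hypothetical:

```spl
| rex field=_raw "(?<request_id>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
```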
If you want to just format those on display but not convert them to strings, then you should use the fieldformat command instead of eval inside the foreach loop. I'm not sure whether the bug in fieldformat has been fixed yet; at least in some earlier versions it didn't work correctly in all cases inside a foreach loop. If it doesn't work, just use an additional field name with eval, and you can use the original for calculations later.
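A sketch of the two approaches; the total_* field names are hypothetical:

```spl
| foreach total_* [ fieldformat <<FIELD>> = printf("%.2f", <<FIELD>>) ]
```

If fieldformat misbehaves inside foreach, keep the original numeric field and format a copy instead:

```spl
| foreach total_* [ eval <<FIELD>>_disp = printf("%.2f", <<FIELD>>) ]
```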
Disclaimer: This is in no way supported and will break Splunk support. Don't use this in a production environment. There are better ways to solve this; I'm just not smart enough to figure them out. This hack breaks at the next update, always, so extra care should be taken.

I think it would be very nice if Splunk could support the desired behavior, maybe as an option or configuration.

There are three places that influence the maintenance window calculation.

service_health_metrics_monitor, original:

| mstats latest(alert_level) AS alert_level WHERE `get_itsi_summary_metrics_index` AND `service_level_max_severity_metric_only` by itsi_kpi_id, itsi_service_id, kpi, kpi_importance
| lookup kpi_alert_info_lookup alert_level OUTPUT severity_label AS alert_name
| `mark_services_in_maintenance`
| `reorganize_metrics_healthscore_results`
| gethealth
| `get_info_time_without_sid`
| lookup service_kpi_lookup _key AS itsi_service_id OUTPUT sec_grp AS itsi_team_id
| search itsi_team_id=*
| fields - alert_severity, color, kpi, kpiid, serviceid, severity_label, severity_value
| rename health_score AS service_health_score
| eval is_null_alert_value=if(service_health_score="N/A", 1, 0), service_health_score=if(service_health_score="N/A", 0, service_health_score)

This could be changed to:

| mstats latest(alert_level) AS alert_level WHERE `get_itsi_summary_metrics_index` AND `service_level_max_severity_metric_only` by itsi_kpi_id, itsi_service_id, kpi, kpi_importance
| lookup kpi_alert_info_lookup alert_level OUTPUT severity_label AS alert_name
| `mark_services_in_maintenance`
| `reorganize_metrics_healthscore_results`
| gethealth
| `get_info_time_without_sid`
| lookup service_kpi_lookup _key AS itsi_service_id OUTPUT sec_grp AS itsi_team_id
| fields - alert_severity, color, kpi, kpiid, serviceid, severity_label, severity_value
| rename health_score AS service_health_score
| `mark_services_in_maintenance`
| eval is_null_alert_value=if(service_health_score="N/A", 1, 0), service_health_score=if(service_health_score="N/A", 0, service_health_score), alert_level=if(is_service_in_maintenance=1 AND alert_level>-2, -2, alert_level)

I have added an extra call to the macro `mark_services_in_maintenance` and expanded the last eval to set alert_level to maintenance.

service_health_monitor, original:

`get_itsi_summary_index` host=atp-00pshs* `service_level_max_severity_event_only`
| stats latest(urgency) AS urgency latest(alert_level) AS alert_level latest(alert_severity) AS alert_name latest(service) AS service latest(is_service_in_maintenance) AS is_service_in_maintenance latest(kpi) AS kpi by kpiid, serviceid
| lookup service_kpi_lookup _key AS serviceid OUTPUT sec_grp AS itsi_team_id
| search itsi_team_id=*
| gethealth
| `gettime`

This could be changed to:

`get_itsi_summary_index` `service_level_max_severity_event_only`
| stats latest(urgency) AS urgency latest(alert_level) AS alert_level latest(alert_severity) AS alert_name latest(service) AS service latest(is_service_in_maintenance) AS is_service_in_maintenance latest(kpi) AS kpi by kpiid, serviceid
| gethealth
| `gettime`
| `mark_services_in_maintenance`
| eval alert_level=if(is_service_in_maintenance=1 AND alert_level>-2, -2, alert_level), color=if(is_service_in_maintenance=1 AND alert_level=-2, "#5C6773", color), severity_label=if(is_service_in_maintenance=1 AND alert_level=-2, "maintenance", severity_label), alert_severity=if(is_service_in_maintenance=1 AND alert_level=-2, "maintenance", alert_severity)

Again, an extra call to the macro `mark_services_in_maintenance`, plus the eval at the bottom to set the service to maintenance.

These two changes ensure the service appears in maintenance in the Service Analyzer and on glass tables. I think it also takes care of deep dives, but they don't appear to turn dark grey. To ensure correct calculation we also have to make changes in the gethealth search command.
The Python script of interest is located at SPLUNK_HOME/etc/apps/SA-ITOA/lib/itsi/searches/compute_health_score.py. Search for "If a dependent service is disabled, its health should not affect other services". You should find code that looks like this:

for depends_on in service.get('services_depends_on', []):
    # If a dependent service is disabled, its health should not affect other services
    dependent_service_id = depends_on.get('serviceid')
    dependency_enabled = [
        svc.get('enabled', 1) for svc in self.all_services
        if dependent_service_id == svc.get('_key')
    ]
    if len(dependency_enabled) == 1 and dependency_enabled[0] == 0:
        continue
    for kpi in depends_on.get('kpis_depending_on', []):
        # Get urgencies for dependent services

What I want is to replicate the behavior of a disabled service:

for depends_on in service.get('services_depends_on', []):
    # If a dependent service is disabled, its health should not affect other services
    dependent_service_id = depends_on.get('serviceid')
    dependency_enabled = [
        svc.get('enabled', 1) for svc in self.all_services
        if dependent_service_id == svc.get('_key')
    ]
    if len(dependency_enabled) == 1 and dependency_enabled[0] == 0:
        continue
    # If a dependent service is in maintenance, its health should not affect other services - ATP
    maintenance_service_id = depends_on.get('serviceid')
    try:
        isinstance(self.maintenance_services, list)
    except AttributeError:
        self.maintenance_services = None
    if self._is_service_currently_in_maintenance(maintenance_service_id):
        self.logger.info('ATP - is service in maintenance %s',
                         self._is_service_currently_in_maintenance(maintenance_service_id))
        continue
    for kpi in depends_on.get('kpis_depending_on', []):
        # Get urgencies for dependent services

So I added a call to the existing function _is_service_currently_in_maintenance. This initially failed because the maintenance_services table is uninitialized, hence the try/except block; now it is just a simple check: if the service we depend on is in maintenance, we break out with continue.

Again, this is NOT supported in any way, should not be used in production, and will break at the next update. Kind regards
Thanks for your help. Solution is working as expected.