Hi, a bit of a strange one that I can't work out. I have a deployer and a search head in one DC, and 2 search heads in another DC. They are all part of the same search head cluster and all share the same configs. The app has been deployed from the deployer to all the search heads. The 2 search heads located in the same DC have the app and the correct configs, but don't perform any field extractions. Interestingly, if I open an event and select "Extract Fields", the parser sees the fields. The search head on its own performs as expected. I can see no errors, and running btool confirms the file is also correct.
It's the first time I've ever come across this.
TIA
Steve
Hi, it looks like some weird file permission issue. Although everything in /opt/splunk is readable by the splunk user, it still couldn't see props.conf, which of course makes no sense when the working search head has the exact same perms and works without issue. On one of the affected search heads, I did a chmod -R 755 on /opt/splunk/etc/apps/* and it started working immediately on both affected servers??? Very strange.
I'll mark this as resolved 🙂
Thanks
Steve
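For anyone hitting the same thing, here's a minimal sketch of the permission reset described above, run against a scratch directory rather than a live install (on the real search heads the target was /opt/splunk/etc/apps/*; the app and file names here are placeholders):

```shell
# Stand-in for $SPLUNK_HOME/etc/apps so this can be run safely anywhere
demo=$(mktemp -d)
mkdir -p "$demo/app_name/local"
printf '[sourcetype]\n' > "$demo/app_name/local/props.conf"

# Simulate a props.conf that splunkd can't read
chmod 600 "$demo/app_name/local/props.conf"

# The fix applied in the thread: recursively reset perms under the apps dir
chmod -R 755 "$demo"/*

mode=$(stat -c '%a' "$demo/app_name/local/props.conf")
echo "props.conf mode: $mode"   # prints: props.conf mode: 755

rm -rf "$demo"
```

Note that 755 on config files is broader than Splunk strictly needs (644 would suffice for files), but it matches what was actually run here.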
Hi @cdstealer, could you please share your props? Also, can you check with the query below on the SH that your extractions show up?
| rest splunk_server=local servicesNS/nobody/<your-app_name>/configs/conf-props/
| rex field=id "servicesNS\/nobody\/(?<app>.+?)\/"
| where app="<your-app_name>"
| fields - eai:* SEGMENTATION*
| fields id,title, sourcetype, updated, disabled *, FIELDALIAS-*
| search title=<sourcetype name>
| transpose 1 column_name=setting
| rename "row 1" as value
| search value != ""
| eval value=case(value=1,"true", value=0,"false",1=1,value)
| eval stanzas = setting +" = "+value
| stats LIST(stanzas) as Stanza
Hi Nisha, thanks for the query. This is the output on both a working and a non-working search head:
id = https://127.0.0.1:8089/servicesNS/nobody/app_name/configs/conf-props/sourcetype
title = sourcetype
updated = 1970-01-01T01:00:00+01:00
disabled = false
ADD_EXTRA_TIME_FIELDS = true
ANNOTATE_PUNCT = true
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE_DATE = true
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
EVAL-duration = hour * 3600 + minute * 60 + second + precise/10000000
EXTRACT-fields = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} \[\d+\] \[(?P<ClientIP>[^ ]*) \(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\)\] INFO\s+(?P<GingerClass>[^ ]*) - Start: (?P<JobStartTime>[^;]*);Reseller:\s(?P<ResellerName>[^;]*);ResellerUnit: (?P<ResellerUnit>[^;]*);(?P<JobGUID>[^;]*);(?P<UserGUID>[^;]*)?;(?P<ApiMethod>[^;]*);Duration: (?P<hour>[^:]\d+):(?P<minute>[^:]\d+):(?P<second>[^\.]\d+)\.(?P<precise>[^;]\d+);(?P<RegOutcome>[^;]*)?;(?P<FailReason>[^$]*)?
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^
TRUNCATE = 10000
app = app_name
author = nobody
detect_trailing_nulls = false
maxDist = 100
splunk_server = searchhead1
The actual props.conf is:
[sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
EXTRACT-fields = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} \[\d+\] \[(?P<ClientIP>[^ ]*) \(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\)\] INFO\s+(?P<GingerClass>[^ ]*) - Start: (?P<JobStartTime>[^;]*);Reseller:\s(?P<ResellerName>[^;]*);ResellerUnit: (?P<ResellerUnit>[^;]*);(?P<JobGUID>[^;]*);(?P<UserGUID>[^;]*)?;(?P<ApiMethod>[^;]*);Duration: (?P<hour>[^:]\d+):(?P<minute>[^:]\d+):(?P<second>[^\.]\d+)\.(?P<precise>[^;]\d+);(?P<RegOutcome>[^;]*)?;(?P<FailReason>[^$]*)?
EVAL-duration = hour * 3600 + minute * 60 + second + precise/10000000
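As a sanity check of what the EVAL-duration calculated field should produce, here is the same arithmetic in plain awk (the hour/minute/second/precise values are made up, corresponding to a hypothetical log line of `Duration: 00:01:30.5000000`; `precise` is the 7-digit fractional part captured by EXTRACT-fields):

```shell
# EVAL-duration = hour * 3600 + minute * 60 + second + precise/10000000
awk 'BEGIN { h = 0; m = 1; s = 30; p = 5000000
             printf "%.1f\n", h*3600 + m*60 + s + p/10000000 }'
# prints 90.5
```

So on the working search head this event should yield duration=90.5; on the affected search heads the field never appears because the calculated field depends on the search-time extractions that aren't being applied.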
OK, I fixed the field extractions by converting to index-time rather than search-time. However, any search-time functions, e.g. calculated fields, are still being ignored by the 2 search heads. I'll keep digging.
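For context, an index-time conversion would look roughly like the sketch below (stanza and transform names are placeholders, and the regex is abbreviated to two capture groups for brevity). Note that index-time extractions run at parse time, so they belong on the indexing tier rather than the search heads, and only affect data indexed after the change:

```ini
# props.conf
[sourcetype]
TRANSFORMS-fields = extract_ginger_fields

# transforms.conf
[extract_ginger_fields]
REGEX = INFO\s+(\S+) - Start: ([^;]*)
FORMAT = GingerClass::$1 JobStartTime::$2
WRITE_META = true
```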
I propose that you create a ticket with Splunk support, if you haven't done so yet.
But I'm really interested to hear what the issue was once you've fixed it 😉
Will do 🙂
Hi Soutamo, we don't run a multisite cluster due to its scale and function. So I have the deployer and a search head in one location, and then 2 search heads in the other location. These are set up behind a load balancer which will fail over to the single search head should the primary location fail.