All Posts

Hi, I tried with a fieldset in the form, but it still fetches results based on the first dropdown alone and runs the search.

Current behavior: The dashboard fetches results immediately when the "env" dropdown is selected (e.g., "test" or "prod"). Results are fetched without considering other filters like "data entity" or "time".

Expected behavior: The dashboard should wait for the user to select a value from the "env" dropdown (e.g., "test" or "prod"), select a value from the "data entity" dropdown, and specify a time range. Only after all selections are made and the "Submit" button is clicked should the query execute and fetch results.

Could someone help with this? This is the dashboard after I tried adding the fieldset:

<form version="1.1" theme="dark">
<label>Metrics222</label>
<fieldset>
<input type="dropdown" token="indexToken1" searchWhenChanged="false">
<label>Environment</label>
<choice value="prod-,prod,*">PROD</choice>
<choice value="np-,test,*">TEST</choice>
<change>
<eval token="stageToken">mvindex(split($value$,","),1)</eval>
<eval token="indexToken">mvindex(split($value$,","),0)</eval>
</change>
</input>
<input type="dropdown" token="entityToken" searchWhenChanged="false">
<label>Data Entity</label>
<choice value="*">ALL</choice>
</input>
<input type="time" token="timeToken" searchWhenChanged="false">
<label>Time</label>
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<html id="APIStats">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">API USAGE STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<title>Unique User / Unique Client</title>
<search>
<query>index=$indexToken$ AND source="/aws/lambda/g-lambda-au-$stageToken$" | stats dc(claims.sub) as "Unique Users", dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{} | rename claims.cid AS app, claims.groups{} AS groups | table app "Unique Users" "Unique Clients" groups</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
</row>
<row>
<panel>
<html id="nspCounts">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">NSP STREAM STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<title>Unique Consumer</title>
<search>
<query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | sort -_time | head 1 | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name="*costing*" | table Name, DistinctAdminUserCount</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
<panel>
<table>
<title>Event Processed</title>
<search>
<query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "success Published to NSP3 objectType*" | rex field=msg "objectType\s*:\s*(?&lt;objectType&gt;[^\s]+)" | stats count by objectType</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<table>
<title>Number of Errors</title>
<search>
<query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "error*" | stats count</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>API : Data/Search Count</title>
<html id="errorcount5">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user"> API COUNT STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<title>Total Request Data</title>
<table>
<search>
<query>(index=$indexToken$ source="/aws/lambda/api-data-$stageToken$-$entityToken$" OR source="/aws/lambda/api-commands-$stageToken$-*") ge:*:init:*:invoke | spath path=event.path output=path | spath path=event.httpMethod output=http | eval Path=http + " " + path | stats count by Path</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Total Request Search</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") ge:init:*:invoke | spath path=path output=path | spath path=httpMethod output=http | eval Path=http + " " + path | stats count by Path</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Total Error Count :</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") msg="error*" (error.status=4* OR error.status=5*) | eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx") | stats count by error.status</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Response Time Count in ms</title>
<table>
<search>
<query>index=np-papi source IN ("/aws/lambda/api-search-test-*") "ge:init:search:response" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="Search API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime | append [ search index=np-papi source IN ("/aws/lambda/api-data-test-*") msg="ge:init:data:*" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="DATA API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime ] | table API, TotalResponseTime, AvgResponseTime</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<html id="errorcount16">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">Request per min</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$","/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:*:*" | timechart span=1m count by source | untable _time source count | stats sum(count) as TotalCount, avg(count) as AvgCountPerMin by source | eval AvgCountPerMin = round(AvgCountPerMin, 2) | eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-data-prod-(.*)"), replace(source, "/aws/lambda/api-data-prod-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")))) | table source, TotalCount, AvgCountPerMin</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>SLA % :DATA API</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$") "ge:init:data:responseTime" | eval SLA_threshold = 113 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "DATA API" | table source, SLA_percentage, totalRequests, SLA_passed</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>SLA % :SEARCH API</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:search:response:time" | eval SLA_threshold = 100 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "SEARCH API" | eval source = if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")) | table source, SLA_percentage, totalRequests, SLA_passed</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
</form>
Thank you for your feedback. I appreciate the time and expertise you and others volunteer here. My intention wasn't to exclude anyone. I'll keep your advice in mind for future posts to ensure all contributions are valued equally, and I will repost without tagging anyone.
@PickleRick https://splunkbase.splunk.com/app/4310 This is the app I have installed. Will it cause a problem while pushing from the DS to the HF?
The fields command is a distributable streaming command because it *can* run on indexers.  That does not mean it cannot run on search heads. It's also possible you are confusing "search-time" field extraction with something that only occurs on a search head.  Indexers also perform search-time field extraction.
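[Editor's note: as a minimal illustration of the point above (the index, sourcetype, and field names here are hypothetical), the fields command in a search like this can be distributed to the indexers, and each indexer performs search-time field extraction on its own events before the search head merges the results:]

index=web sourcetype=access_combined status=5*
| fields host, status, uri_path
| stats count by status, uri_path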
As a rule of thumb, an add-on should work whether it's deployed locally or distributed from the DS. There are some possible problems (some of which you can tackle successfully) coming from two sources:
1) When deployed from the DS, an app is pushed as a whole. So, differently from the deployer, where you can configure the push mode, you're deploying the whole app and overwriting local changes.
2) There can be problems with pushing secrets with the app. If an app is made according to Splunk practices, the secrets should either be in plain text, to be encrypted on first use, or be encrypted with the destination HF's splunk.secret (which raises issues when you want to distribute the same app across multiple HFs). If the app stores secrets its own way... well, you're on your own.
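[Editor's note: to make the "pushed as a whole" point concrete, here is a minimal serverclass.conf sketch on the DS; the server class, host pattern, and app name are hypothetical. Every push replaces the entire app directory on the client, local/ changes included:]

# serverclass.conf on the deployment server (names are hypothetical)
[serverClass:heavy_forwarders]
whitelist.0 = hf-*.example.com

[serverClass:heavy_forwarders:app:TA-Akamai_SIEM]
stateOnClient = enabled
restartSplunkd = true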
Please do not tag me - I, like many here, volunteer my time and expertise and it is not for others to suggest what I work on. By specifically addressing people, you are also potentially excluding others who may have valuable contributions to make; it is like you don't value or are not interested in their efforts (since you haven't also directly addressed them). I imagine this can be counter-productive to resolving your issue (whatever that might be)!
We are installing a modular input (the Akamai add-on) to get Akamai logs into Splunk. In our environment, we have kept the modular input on the DS under deployment-apps and pushed it to the HF using a serverclass. Is this the issue? Do modular inputs need to be installed directly on the HF rather than pushed from the DS? Because we are configuring the data input on the HF (where it was pushed from the DS), and when saving it throws a 404 error, action forbidden. When we install it directly on the HF, it saves perfectly. In our environment almost all apps are pushed from the DS to the CM and from the DS to the deployer, even ones that are not modular inputs and contain just configs, and so far this has worked well. Is it bad practice in the case of modular inputs? Please guide me.
No, unfortunately not. There is a conversion path from Classic to Studio, but even that is not perfect and many advanced features of Classic have no direct equivalents in Studio.
Hi, I have a dashboard (code below). Currently it fetches results whenever the env dropdown is selected, either test or prod. My use case is: once I select the env from the dropdown, select the data entity, and set the time range, then when I hit the Submit button it should fetch results based on all these selections.

<form version="1.1" theme="dark">
<label>Metrics222</label>
<fieldset>
<input type="dropdown" token="indexToken1" searchWhenChanged="false">
<label>Environment</label>
<choice value="prod-,prod,*">PROD</choice>
<choice value="np-,test,*">TEST</choice>
<change>
<eval token="stageToken">mvindex(split($value$,","),1)</eval>
<eval token="indexToken">mvindex(split($value$,","),0)</eval>
</change>
</input>
<input type="dropdown" token="entityToken" searchWhenChanged="false">
<label>Data Entity</label>
<choice value="*">ALL</choice>
</input>
<input type="time" token="timeToken" searchWhenChanged="false">
<label>Time</label>
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<html id="APIStats">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">API USAGE STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<title>Unique User / Unique Client</title>
<search>
<query>index=$indexToken$ AND source="/aws/lambda/g-lambda-au-$stageToken$" | stats dc(claims.sub) as "Unique Users", dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{} | rename claims.cid AS app, claims.groups{} AS groups | table app "Unique Users" "Unique Clients" groups</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
</row>
<row>
<panel>
<html id="nspCounts">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">NSP STREAM STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<title>Unique Consumer</title>
<search>
<query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | sort -_time | head 1 | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name="*costing*" | table Name, DistinctAdminUserCount</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
<panel>
<table>
<title>Event Processed</title>
<search>
<query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "success Published to NSP3 objectType*" | rex field=msg "objectType\s*:\s*(?&lt;objectType&gt;[^\s]+)" | stats count by objectType</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<table>
<title>Number of Errors</title>
<search>
<query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "error*" | stats count</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>API : Data/Search Count</title>
<html id="errorcount5">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user"> API COUNT STATISTICS</h2>
</html>
</panel>
</row>
<row>
<panel>
<title>Total Request Data</title>
<table>
<search>
<query>(index=$indexToken$ source="/aws/lambda/api-data-$stageToken$-$entityToken$" OR source="/aws/lambda/api-commands-$stageToken$-*") ge:*:init:*:invoke | spath path=event.path output=path | spath path=event.httpMethod output=http | eval Path=http + " " + path | stats count by Path</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Total Request Search</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") ge:init:*:invoke | spath path=path output=path | spath path=httpMethod output=http | eval Path=http + " " + path | stats count by Path</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Total Error Count :</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") msg="error*" (error.status=4* OR error.status=5*) | eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx") | stats count by error.status</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>Response Time Count in ms</title>
<table>
<search>
<query>index=np-papi source IN ("/aws/lambda/api-search-test-*") "ge:init:search:response" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="Search API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime | append [ search index=np-papi source IN ("/aws/lambda/api-data-test-*") msg="ge:init:data:*" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="DATA API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime ] | table API, TotalResponseTime, AvgResponseTime</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<html id="errorcount16">
<style> #user{ text-align:center; color:#BFFF00; } </style>
<h2 id="user">Request per min</h2>
</html>
</panel>
</row>
<row>
<panel>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$","/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:*:*" | timechart span=1m count by source | untable _time source count | stats sum(count) as TotalCount, avg(count) as AvgCountPerMin by source | eval AvgCountPerMin = round(AvgCountPerMin, 2) | eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-data-prod-(.*)"), replace(source, "/aws/lambda/api-data-prod-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")))) | table source, TotalCount, AvgCountPerMin</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
<row>
<panel>
<title>SLA % :DATA API</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$") "ge:init:data:responseTime" | eval SLA_threshold = 113 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "DATA API" | table source, SLA_percentage, totalRequests, SLA_passed</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
<panel>
<title>SLA % :SEARCH API</title>
<table>
<search>
<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:search:response:time" | eval SLA_threshold = 100 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "SEARCH API" | eval source = if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")) | table source, SLA_percentage, totalRequests, SLA_passed</query>
<earliest>$timeToken.earliest$</earliest>
<latest>$timeToken.latest$</latest>
<refresh>60m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
</row>
</form>
Hi @livehybrid, thank you so much for your feedback; it was very helpful in getting my dashboard working in Splunk. Thank you! Regards DTapia
@Karthikeya The error shows the TA-Akamai_SIEM modular input is failing with HTTP 404 -- Action forbidden. This likely means the API endpoint is incorrect or access is denied due to invalid credentials or permissions.

Check HF network access to Akamai:

curl -i https://<akamai-api-endpoint>

Replace <akamai-api-endpoint> with the exact API URL you're using. You should NOT get a 404 or 403 if the endpoint and credentials are correct.

Make sure all required fields (API URL, credentials, etc.) are correctly filled, and contact Akamai support to confirm that:
- the API credentials (tokens) are still active and have permission to fetch SIEM logs;
- the specific endpoint being used is correct (Akamai has multiple regions and base URLs).
@livehybrid And about the first approach: after updating the values in the .conf files, I still have to restart the splunkd service on the server, right? Or is restarting the splunk service on the DS an alternative?
True, in this case it's universal forwarders, which we don't manage, sending the logs, which is the only reason they suggested deploying a TA at the indexer level. They might want the custom sourcetype changed to a standardized one. Also, thank you: I haven't seen anything in the Splunk admin course to suggest it goes to that level, and neither does the Udemy course. I might retake the Udemy course as a refresher and then schedule the Splunk Cloud course, since there's been talk of us migrating to it, so the Splunk Cloud course would be more practical. Thanks.
Hello GCusello, thank you very much for the support, it worked very well. Regards DTapia
@StephenD1 The FIELD_NAMES and FIELD_DELIMITER attributes only apply when INDEXED_EXTRACTIONS is set. Please have a look at https://community.splunk.com/t5/Getting-Data-In/Error-Bug-during-applyPendingMetadata-header-processor-does-not/m-p/239170

The key here is that you are using INDEXED_EXTRACTIONS. Sourcetypes that use INDEXED_EXTRACTIONS need to have their props.conf on the universal forwarder. There is a good explanation as to the "why" here: https://community.splunk.com/t5/Getting-Data-In/Why-is-the-sourcetype-specified-in-inputs-conf-on-the-universal/td-p/110285

INDEXED_EXTRACTIONS is a somewhat special processor that usually runs on universal forwarders to ingest structured data. This is done in the parsing queue. The slides for the 2015 .conf session are here: https://conf.splunk.com/session/2015/conf2015_ABath_JKerai_Splunk_SplunkClassics_HowSplunkdWorks.pdf

From props.conf.spec:

FIELD_DELIMITER = <character>
* Which character delimits or separates fields in the specified file or source.
* You can use the delimiters for structured data header extraction with this setting.
* This setting supports the use of the special characters described above.
* The default can vary if 'INDEXED_EXTRACTIONS' is set.
* Default (if 'INDEXED_EXTRACTIONS' is not set): not set

FIELD_NAMES = [ <string>,..., <string>]
* Some CSV and structured files might have missing headers.
* This setting tells Splunk software to specify the header field names directly.
* The default can vary if 'INDEXED_EXTRACTIONS' is set.
* Default (if 'INDEXED_EXTRACTIONS' is not set): not set
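[Editor's note: putting those pieces together, a UF-side props.conf stanza for the whitespace-delimited file in the question might look like the sketch below. The sourcetype and field names are placeholders taken from the question, and whether W3C is the right INDEXED_EXTRACTIONS value depends on the actual file format (CSV, TSV, PSV, W3C, and JSON are the documented values), so verify against props.conf.spec:]

# props.conf on the universal forwarder (sourcetype and field names are placeholders)
[sourcetype:here]
INDEXED_EXTRACTIONS = W3C
FIELD_DELIMITER = whitespace
FIELD_NAMES = field1,field2,field3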
Hi, usually you don't put TAs on indexers in a distributed environment if you have HFs in use. You must put those on the HF and SH layers. This is usually stated in a TA's installation instructions. Usually the only part to put onto indexers via the CM is the index definitions. I haven't taken those Udemy courses, so I cannot say anything about them. If I recall right, the Splunk admin course doesn't go through these at this level either? I don't know if that has changed or not. But just read what we have written here and ask more questions about those issues, and you will get that information. r. Ismo
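[Editor's note: as an illustration of "only index definitions go to the indexers", an app pushed from the CM often contains nothing but an indexes.conf like this; the index name and app layout are hypothetical:]

# indexes.conf in an app distributed by the cluster manager (names hypothetical)
[akamai_siem]
homePath   = $SPLUNK_DB/akamai_siem/db
coldPath   = $SPLUNK_DB/akamai_siem/colddb
thawedPath = $SPLUNK_DB/akamai_siem/thaweddb
repFactor  = auto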
IMHO, never use IPs when you configure Splunk infra nodes (I made this mistake once). My primary way is to use a native DNS service where I put/update node names like xx-IDX-a-1 as FQDNs. Another option is to use static CNAMEs, and the last option is to use the hosts file on the nodes. That way you can do most admin operations without any service breaks.
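[Editor's note: for example, a forwarder's outputs.conf that references FQDNs instead of IPs (host names hypothetical) lets you swap or move an indexer by updating DNS, with no change on the forwarders:]

# outputs.conf on a forwarder (host names are hypothetical)
[tcpout:primary_indexers]
server = xx-idx-a-1.example.com:9997, xx-idx-a-2.example.com:9997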
I've noticed an issue with one of my syslog indexes. I have a syslog server centralizing and forwarding syslogs for 6 different indexes. Not too long ago, I modified one of the indexes to extract fields at the UF instead of the indexer (this solved another problem that is not relevant here; I can provide detail if it becomes relevant). I noticed that occasionally, the index that is extracting field names at the UF stops sending while the others keep sending. The only thing that reliably gets it sending again is restarting the Splunk service on the UF.

I'm new-ish to Splunk, so I'm sure I am not troubleshooting all the things I should be. The one thing I noticed is that right before the index stops sending, I see errors in the _internal index for that host:

ERROR TailReader [2264207 tailreader0] - Ignoring path="<path/to/log/syslog.log>" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.

Based on some research here, I think I've discovered the problem, but I need confirmation before I start making changes. I added the following settings for the sourcetype in props.conf on the forwarder:

[sourcetype:here]
FIELD_DELIMITER = whitespace
FIELD_NAMES = field1,field2,field3,etc...

I think the problem is that I did not specify:

...
INDEXED_EXTRACTIONS = W3C
...

So my question is: do I need the INDEXED_EXTRACTIONS parameter if I use FIELD_DELIMITER and FIELD_NAMES, or can those be used without it? I believe this is what is missing and is causing Splunk to periodically stop processing the file. If I do not need it, then I would need to search for a different cause. Thanks in advance for your help.
Thanks @ITWhisperer for your quick answer. I'm a bit new to Splunk, so I started straight away with Dashboard Studio as it's easier. So if there's a workaround within Dashboard Studio, that would be perfect. Otherwise I'll also start looking at classic dashboards and classic SimpleXML. Is it possible to switch in both directions between classic dashboards (XML) and Dashboard Studio (JSON)?
You probably have more scope for doing this sort of thing in Classic SimpleXML dashboards than Studio! (Sorry if that wasn't very helpful.)