All Posts

Cheers Rick, the regex I ended up with is (?:.*)\/(\w*). The one you suggested, (?:.*)/(\w*), didn't work. Thanks, Alex
Hi, on a dashboard I have a simple checkbox element with the token name LastOne_tkn. If the checkbox is enabled, LastOne_tkn=TRUE. There is a simple small table view which shows some results, and I would like to run the query in that table view based on the LastOne_tkn condition.

LastOne_tkn=TRUE (dedup activated):

index=machinedata | dedup Attr1 | table Attr1, Attr2

LastOne_tkn=otherwise (dedup deactivated):

index=machinedata | table Attr1, Attr2

Any idea, please?
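What I am imagining is something like letting the checkbox set a second token that holds either the dedup clause or nothing, and then interpolating that token into the search. A rough sketch (dedup_clause is just a name I made up for illustration):

<input type="checkbox" token="LastOne_tkn">
  <label>Dedup</label>
  <choice value="TRUE">last one only</choice>
  <change>
    <condition value="TRUE">
      <set token="dedup_clause">| dedup Attr1</set>
    </condition>
    <condition>
      <set token="dedup_clause"></set>
    </condition>
  </change>
</input>

and in the table panel:

<query>index=machinedata $dedup_clause$ | table Attr1, Attr2</query>

(An <init> block setting dedup_clause to empty may be needed so the panel can run before the checkbox is first touched.)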
Thank you for posting a mock data emulation. Obviously the app developers do not implement self-evident semantics and should be cursed. (Not just for Splunk's sake, but for every other developer's sanity.) If you have any influence on the developers, demand that they change the JSON structure to something like

{
  "browser_id": "0123456",
  "browsers": {
    "fullName": "blahblah",
    "name": "blahblah",
    "state": 0,
    "lastResult": {
      "success": 1,
      "failed": 2,
      "skipped": 3,
      "total": 4,
      "totalTime": 5,
      "netTime": 6,
      "error": true,
      "disconnected": true
    },
    "launchId": 7
  },
  "result": [
    {
      "id": 8,
      "description": "blahblah",
      "suite": [ "blahblah", "blahblah" ],
      "fullName": "blahblah",
      "success": true,
      "skipped": true,
      "time": 9,
      "log": [ "blahblah", "blahblah" ]
    }
  ],
  "summary": {
    "success": 10,
    "failed": 11,
    "error": true,
    "disconnected": true,
    "exitCode": 12
  }
}

That is, isolate the browser_id field into a unique key for browsers, result, and summary. The structure you shared cannot express more than one semantic browser_id. But if for some bizarre reason the browser_id needs to be passed along in result because summary is not associated with browser_id in each event, say so expressly with a JSON key, like

{
  "browsers": {
    "id": "0123456",
    "fullName": "blahblah",
    "name": "blahblah",
    "state": 0,
    "lastResult": {
      "success": 1,
      "failed": 2,
      "skipped": 3,
      "total": 4,
      "totalTime": 5,
      "netTime": 6,
      "error": true,
      "disconnected": true
    },
    "launchId": 7
  },
  "result": {
    "id": "0123456",
    "output": [
      {
        "id": 8,
        "description": "blahblah",
        "suite": [ "blahblah", "blahblah" ],
        "fullName": "blahblah",
        "success": true,
        "skipped": true,
        "time": 9,
        "log": [ "blahblah", "blahblah" ]
      }
    ]
  },
  "summary": {
    "success": 10,
    "failed": 11,
    "error": true,
    "disconnected": true,
    "exitCode": 12
  }
}

Embedding data in a JSON key is the worst use of JSON - or any structured data. (I mean, I recently lamented worse offenders, but imagine embedding data in a column name in SQL! The developer would be cursed by the entire world.)

This said, if your developer has a gun over your head, or they are from a third party that you have no control over, you can SANitize their data, i.e., make the structure saner using SPL. But remember: a bad structure is bad not because a programming language has difficulty with it. A bad structure is bad because downstream developers cannot determine the actual semantics without reading the original manual. Do you have their manual to understand what each structure means? If not, you are very likely to misrepresent their intention and therefore get the wrong result.

Caveat: As we are speaking of semantics, I want to point out that your illustration uses the plural "browsers" as one key name and the singular "result" as another, yet the value of (plural) "browsers" is not an array, while the value of (singular) "result" is an array. If this is not the true structure, you have changed the semantics your developers intended, and the following may lead to wrong output. Secondly, your illustrated data has a level-1 key of "0123456" in browsers, an identical level-1 key of "0123456" in result, a matching level-2 id of "0123456" in browsers, and a different level-2 id of "8" in result. I assume that all matching numbers are semantically identical and all non-matching numbers are semantically different.

Here, I will give you SPL to interpret their intention as in my first illustration, i.e., a single browser_id applies to the entire event. I will assume that you have Splunk 9 or above so fromjson works.
(This can be solved using spath with a slightly more cumbersome quotation manipulation.) Here is the code to detangle the semantic madness. This code does not require the first line, | fields _raw, but keeping it can help eliminate distractions.

| fields _raw ``` to eliminate unusable fields from bad structure ```
| fromjson _raw
| eval browser_id = json_keys(browsers), result_id = json_keys(result)
| eval EVERYTHING_BAD = if(browser_id != result_id OR mvcount(browser_id) > 1, "baaaaad", null())
| foreach browser_id mode=json_array
    [eval browsers = json_delete(json_extract(browsers, <<ITEM>>), "id"), result = json_extract(result, <<ITEM>>)]
| spath input=browsers
| spath input=result path={} output=result
| mvexpand result
| spath input=result
| spath input=summary
| fields - _* result_id browsers result summary

This is the output based on your mock data; to illustrate the result[] array, I added one more mock element.

browser_id description disconnected error exitCode failed fullName id lastResult.disconnected lastResult.error lastResult.failed lastResult.netTime lastResult.skipped lastResult.success lastResult.total lastResult.totalTime launchId log{} name skipped state success suite{} time
["0123456"] blahblah true true 12 11 blahblah blahblah 8 true true 2 6 3 1 4 5 7 blahblah blahblah blahblah true 0 true 10 blahblah blahblah 9
["0123456"] blahblah 9 true true 12 11 blahblah blahblah9 9 true true 2 6 3 1 4 5 7 blahblah 9a blahblah 9b blahblah true 0 true 10 blahblah9a blahblah9b 11

In the table, "id" is from the result[] array.

This is the emulation of the expanded mock data. Here, I decided not to use format=json because this preserves the pretty-print format, and because Splunk will not show fromjson-style fields automatically. (With real data, fromjson-style fields are not used in 9.x.)

| makeresults
| eval _raw = "
{
  \"browsers\": {
    \"0123456\": {
      \"id\": \"0123456\",
      \"fullName\": \"blahblah\",
      \"name\": \"blahblah\",
      \"state\": 0,
      \"lastResult\": {
        \"success\": 1,
        \"failed\": 2,
        \"skipped\": 3,
        \"total\": 4,
        \"totalTime\": 5,
        \"netTime\": 6,
        \"error\": true,
        \"disconnected\": true
      },
      \"launchId\": 7
    }
  },
  \"result\": {
    \"0123456\": [
      {
        \"id\": 8,
        \"description\": \"blahblah\",
        \"suite\": [ \"blahblah\", \"blahblah\" ],
        \"fullName\": \"blahblah\",
        \"success\": true,
        \"skipped\": true,
        \"time\": 9,
        \"log\": [ \"blahblah\", \"blahblah\" ]
      },
      {
        \"id\": 9,
        \"description\": \"blahblah 9\",
        \"suite\": [ \"blahblah9a\", \"blahblah9b\" ],
        \"fullName\": \"blahblah9\",
        \"success\": true,
        \"skipped\": true,
        \"time\": 11,
        \"log\": [ \"blahblah 9a\", \"blahblah 9b\" ]
      }
    ]
  },
  \"summary\": {
    \"success\": 10,
    \"failed\": 11,
    \"error\": true,
    \"disconnected\": true,
    \"exitCode\": 12
  }
}
"
| spath ``` the above partially emulates index="github_runners" sourcetype="testing" source="reports-tests" ```
This will do the trick:

| mstats avg(cpu_metric.*) as cpu_* WHERE index=<your_metrics_index> by CPU, host
| table CPU, host
| eventstats max(CPU) as cpu_count by host
| table cpu_count, host
| eval cpu_count=cpu_count+1 ``` CPU numbers are zero-based, so the highest number plus 1 is the core count ```

The data being used is from the Splunk Add-on for Unix and Linux (see the add-on docs).
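If the CPU dimension also carries a non-numeric aggregate value such as "all" (which some Unix add-on metrics emit - check your data), a distinct count may be more robust. An untested sketch:

| mstats avg(cpu_metric.*) as cpu_* WHERE index=<your_metrics_index> by CPU, host
``` drop the aggregate row if present, then count distinct cores per host ```
| where CPU!="all"
| stats dc(CPU) as cpu_count by host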
Hi @SumitSharma, this app isn't certified for Splunk Cloud. In addition, this app doesn't seem to be free (I could be wrong about this!). Anyway, you should consider it a custom app and modify it to remove the parts containing scripts, which would probably block the upload to Splunk Cloud. The app isn't accessible, so I can't be more detailed. Ciao. Giuseppe
Hi @Ram2, in the code you shared there are some missing parts. Also, these aren't just a few hosts, so I suggest using a lookup containing two columns, env and host, like the following:

env host
DEV amptams.dev.com
DEV ampvitss.dev.com
DEV ampdoctrc.dev.com
SIT ampastdmsg.dev.com
SIT ampmorce.dev.com
SIT ampsmls.dev.com
UAT ampserv.dev.com
UAT ampasoomsg.dev.com
SYS ampmsdser.dev.com
SYS ampastcol.dev.com

(Remember to also create the Lookup Definition.) In this way you can use two dropdown lists in cascade, like this:

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="*">All</choice>
      <prefix>env="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>env</fieldForLabel>
      <fieldForValue>env</fieldForValue>
      <search>
        <query>| inputlookup perimeter.csv | dedup env | sort env | table env</query>
      </search>
    </input>
    <input type="dropdown" token="host">
      <label>Server</label>
      <choice value="*">All</choice>
      <prefix>host="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>| inputlookup perimeter.csv WHERE $env$ | dedup host | sort host | table host</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count &amp; Total Count</title>
        <search>
          <query>index=app-index source=application.logs $env$ $host$
            ("Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
            | rex field=_raw "application :\s(?<Application>\w+)"
            | rex field=_raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
            | chart count over Application by Msgs
            | rename "Initial message received with below details" AS Income, "Letter published correctley to ATM subject" AS ATM, "Letter published correctley to DMM subject" AS DMM, "Letter rejected due to: DOUBLE_KEY" AS Reject, "Letter rejected due to: UNVALID_LOG" AS Rej_log, "Letter rejected due to: UNVALID_DATA_APP" AS Rej_app
            | table Income ATM DMM Reject Rej_log Rej_app</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Ciao. Giuseppe
Thanks for your suggestion. I read the link you provided. So, I can't outputlookup data to the KV Store without building the KV collection first, correct? Should I create transforms.conf and collections.conf? I don't have admin rights.

search data | outputlookup kv_store_lookup

https://docs.splunk.com/Documentation/Splunk/9.2.1/Knowledge/ConfigureKVstorelookups
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Outputlookup
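For reference, this is the kind of configuration I understand would be needed in an app's local directory (a sketch based on the docs; the collection name and field names are placeholders):

collections.conf:

[kv_store_collection]
field.host = string
field.status = string

transforms.conf:

[kv_store_lookup]
external_type = kvstore
collection = kv_store_collection
fields_list = _key, host, status

With those in place, | outputlookup kv_store_lookup should be able to write to the collection.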
Do you get all 0 from this?

| makeresults format=csv data="Income, Rej_app, ATM, DMM, Reject, Rej_log
,,,,,
,,,,,
,,,,,"
| fillnull

This is what I get:

ATM DMM Income Rej_app Rej_log Reject
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Hi @ITWhisperer, the output is correct, but I want only one of the results in my output: either "file put successfully" or "inbound file processed". Right now it is showing both, so I want to dedup.
Hello, we are trying to configure the authentication extensions for the Okta identity provider, and below are the steps as per the Splunk documentation:

1. Log into the Splunk Platform as an administrator-level user.
2. From the system bar, click Settings > Authentication Methods.
3. Click "Configure Splunk to use SAML". The "SAML configuration" dialog box appears.
4. In the Script path field within the Authentication Extensions section of the "SAML configuration" dialog box, type in SAML_script_okta.py.
5. In the Script timeout field, type in 300s.
6. In the Get User Info time-to-live field, type in 3600s.
7. Click the Script functions field. In the pop-up window that appears, click getUserInfo.
8. Under Script Secure Arguments, click Add Input. In the Key field, type in apiKey. In the Value field, type in the API key for your IdP.
9. Click "Add input" again. In the Key field, type in baseUrl. In the Value field, type in the URL of your Okta instance.
10. Click Save. Splunk Cloud Platform saves the Okta configuration and returns you to the SAML Groups page.

Could anyone confirm whether these steps work for Splunk on-prem too, or are they applicable only to Splunk Cloud?

Also, regarding the step "In the Value field, type in the API key for your IdP": we have to provide the API key for the IdP, and our security team is asking what permissions the Okta API token needs. Any thoughts? Please advise.

Thank you!
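For on-prem, my assumption is that the same settings map to the [saml] stanza of authentication.conf, something like the following (unverified; please check the authentication.conf spec for your version, in particular the scriptSecureArguments syntax):

[saml]
scriptPath = SAML_script_okta.py
scriptTimeout = 300s
getUserInfoTtl = 3600s
scriptFunctions = getUserInfo
scriptSecureArguments = apiKey:<your Okta API key>;baseUrl:<your Okta org URL>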
How did you apply fillnull?

| fillnull value=0

Do you mean to say that the following doesn't give you 0 when the value is null?

Yes.
We have a dashboard where we want to add a few hosts in a dropdown. I tried using a single host in the dropdown and it works, but when we add multiple hosts it shows a syntax error (invalid attribute).

DEV: amptams.dev.com, ampvitss.dev.com, ampdoctrc.dev.com
SIT: ampastdmsg.dev.com, ampmorce.dev.com, ampsmls.dev.com
UAT: ampserv.dev.com, ampasoomsg.dev.com
SYS: ampmsdser.dev.com, ampastcol.dev.com

Dashboard xml:

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="Server">
      <label>Env wise hosts</label>
      <choice value="amptams.dev.com">ENVINORMENT-DEV</choice>
      <choice value="ampastdmsg.dev.com">ENVINORMENT-SIT</choice>
      <choice value="ampserv.dev.com">ENVINORMENT-UAT</choice>
      <choice value="ampmsdser.dev.com">ENVINORMENT-SYS</choice>>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count &amp; Total Count</title>
        <search>
          <query>index=app-index source=application.logs $Server$
            | rex field=_raw "application :\s(?<Application>\w+)"
            | rex field=_raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
            | chart count over Application by Msgs
            | rename "Initial message received with below details" as Income, "Letter published correctley to ATM subject" as ATM, "Letter published correctley to DMM subject" as DMM, "Letter rejected due to: DOUBLE_KEY" as Reject, "Letter rejected due to: UNVALID_LOG" as Rej_log, "Letter rejected due to: UNVALID_DATA_APP" as Rej_app
            | table Income Rej_app ATM DMM Reject Rej_log Rej_app</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
<form>
How did you apply fillnull? Do you mean to say that the following doesn't give you 0 when the value is null?

index=app-index source=application.logs
| rex field=_raw "application :\s(?<Application>\w+)"
| rex field=_raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over Application by Msgs
| rename "Initial message received with below details" as Income, "Letter published correctley to ATM subject" as ATM, "Letter published correctley to DMM subject" as DMM, "Letter rejected due to: DOUBLE_KEY" as Reject, "Letter rejected due to: UNVALID_LOG" as Rej_log, "Letter rejected due to: UNVALID_DATA_APP" as Rej_app
| table Income Rej_app ATM DMM Reject Rej_log Rej_app
| fillnull Income Rej_app ATM DMM Reject Rej_log Rej_app
First, on the thought process: Splunk allows you to create additional fields in the event stream. If you mark each day as "day -1", "day -2", etc., you can group earliest and latest by day. This is how to do that in Splunk:

index="XYZ" "Batchname1" earliest=-7d@d latest=-0d@d
| eval dayback = mvrange(0, 7)
| eval day = mvmap(dayback, if(_time < relative_time(now(), "-" . dayback . "d@day") AND relative_time(now(), "-" . tostring(dayback + 1) . "d@day") < _time, dayback, null()))
| stats min(_time) as Earliest max(_time) as Latest by day
| fieldformat Earliest = strftime(Earliest, "%F %T")
| fieldformat Latest = strftime(Latest, "%F %T")
| eval day = "day -" . tostring(day + 1)

The output looks like:

day Earliest Latest
day -1 2024-04-23 00:01:00 2024-04-23 23:53:00
day -2 2024-04-22 09:29:00 2024-04-22 23:31:00
day -3 2024-04-21 14:29:00 2024-04-21 14:29:00
day -4 2024-04-20 00:01:00 2024-04-20 19:14:00
day -5 2024-04-19 01:13:00 2024-04-19 23:47:00
day -6 2024-04-18 00:01:00 2024-04-18 23:28:00
day -7 2024-04-17 00:01:00 2024-04-17 23:14:00

Two pointers:
It doesn't seem to make sense to search the current day, so I shifted the search period to latest=-0d@d. If your requirement includes the current day, you need to change latest as well as tweak the definition of day a little.
Do not use earliest(_time); min(_time) is cheaper.

The following is the emulation I used to test the above:

index=_audit earliest=-7d@d latest=-0d@d action=validate_token
| timechart span=1m count
| where count > 0
``` emulation of index="XYZ" "Batchname1" earliest=-7d@d latest=-0d@d ```
Please find the query and sample logs below. The issue is that when there are no logs for one of the Msgs, those columns show null; I tried the fillnull command but it is not working.

index=app-index source=application.logs
| rex field=_raw "application :\s(?<Application>\w+)"
| rex field=_raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over Application by Msgs
| rename "Initial message received with below details" as Income, "Letter published correctley to ATM subject" as ATM, "Letter published correctley to DMM subject" as DMM, "Letter rejected due to: DOUBLE_KEY" as Reject, "Letter rejected due to: UNVALID_LOG" as Rej_log, "Letter rejected due to: UNVALID_DATA_APP" as Rej_app
| table Income Rej_app ATM DMM Reject Rej_log Rej_app

Sample logs:

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Initial message received with below details: Application:Login Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -12 Code partition: 4

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Letter published correctley to ATM subject: Application:Success Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -1 Code partition: 10

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Letter published correctley to DMM subject: Application:normal-state Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -1 Code partition: 6

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Letter rejected due to: DOUBLE_KEY: Application:error-state Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -1 Code partition: 4

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Letter rejected due to: UNVALID_LOG: Application:Debug Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -18 Code partition: 2

2024-01-24 11:21:55,123 [app-product-network-thread | payments_acoount_history_app_hjutr_12nj567fghj5667_product] INFO STREAM_APPLICATION - Timestamp:2024-01-24 11:21:55,123 Letter rejected due to: UNVALID_DATA_APP: Application:logout Code name: payments_acoount_history_app_hjutr_12nj567fghj5667_product Code offset: -4 Code partition: 0
Hi, our application uses the log4j2 logging framework. We are trying to send log signals created by the OTel Logs SDK to the Splunk Cloud Platform. Instead of fileReceiver, we want to send these over HTTP, using the HTTP Event Collector to send the log records to Splunk Cloud. Our configuration for the HEC exporter in the OTel Collector is:

exporters:
  splunk_hec/logs:
    token: "<token>"
    endpoint: "https://<host>:8088/services/collector/raw"
    source: "otel"
    index: "logs"
    disable_compression: false
    tls:
      insecure_skip_verify: true
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [splunk_hec/logs]

We do see the events being received at the Splunk Cloud Platform, but we are not able to query the log data itself. Can someone advise whether this is the correct way, or point me to the right resource? Thanks!
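For comparison, I also considered pointing the exporter at the base collector endpoint and setting a sourcetype so the events are parsed into queryable fields (not yet verified; the sourcetype value is just an example):

exporters:
  splunk_hec/logs:
    token: "<token>"
    endpoint: "https://<host>:8088/services/collector"
    source: "otel"
    sourcetype: "otel:logs"
    index: "logs"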
Your regex seems pretty OK. You could try to simplify it a bit (the character class is not needed if you want just one character, slashes don't need escaping, and {1,} can be replaced by +), so you could do something like this:

(?:/[^/]*)+/(\w*)

But you can simplify it even further:

(?:.*)/(\w*)

One thing to take into account, though: a valid hostname can contain a dash, which is not included in \w. Also, depending on your environment, if it's an FQDN, it can contain dots.
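For instance, something like this would cover both cases (an illustrative variant, not tested against your data):

(?:.*)/([\w.-]+)

The character class simply adds the dash and the literal dot to the \w set.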
Hi. We just upgraded from 9.0.6 to 9.1.4 and are seeing these same warnings. Do we know that this was fixed in 9.1.4?
I have changed my appserver/static/javascript directory, and the setup page that refers to it does not update. I tried uninstalling the add-on and restarting the Splunk server, but it does not change... please help me figure out what I'm missing.

This is my setup page dashboard (./javascript/setup_page.js is the file I changed, without any effect):

<dashboard isDashboard="false" version="1.1"
           script="./javascript/setup_page.js"
           stylesheet="./styles/setup_page.css"
           hideEdit="true"
           hideAppBar="true">
    <label>Setup Page</label>
    <row>
        <panel>
            <html>
                <div id="main_container"></div>
            </html>
        </panel>
    </row>
</dashboard>
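(I have also read that Splunk Web caches static assets from appserver/static, and that loading https://<your-splunk-host>:8000/_bump forces clients to re-fetch them. Could that be relevant here? I have not confirmed this.)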
Don't you mean | rename licenseGB as GB