All Topics



Hi folks, has anyone had success using iframes in Splunk Enterprise 8.x yet? I have tested in multiple 8.0.1 environments and the panel fails to load, while the same code works in 7.0.0 and 7.3.1 environments. Nothing I have seen in older posts works yet. Here is the simple XML to try:

<panel>
  <title>COVID test for iFrame compatibility</title>
  <html>
    <center>
      <iframe src="https://covid-19.splunkforgood.com/coronavirus__covid_19_" width="100%" height="800px"/>
    </center>
  </html>
</panel>
Hello guys! I need to change the values present in the field "Item Codigo". For example:

040500603S007C10 to Product 01
010300404S014C01 to Product 02

I have been searching for a method; I tried using eval, but with no success.
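A common way to map literal values like these is eval with case() (a lookup table scales better if the list is long). A minimal sketch, assuming only these two codes need mapping and other values should pass through unchanged (note the single quotes needed to read a field name containing a space):

```
... | eval "Item Codigo"=case(
        'Item Codigo'=="040500603S007C10", "Product 01",
        'Item Codigo'=="010300404S014C01", "Product 02",
        true(), 'Item Codigo')
```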
Does the publisher intend to release a version of this app that is compatible with Splunk 8.x? The Add-On as it exists now does not pass validation/upgrade preparedness. Please advise.
We have a custom app that reports as a "Blocker" due to:

Check 6: Splunk web legacy mode. If you upgrade to Splunk Enterprise 8.0 without addressing this check, this app may break. Required Action: Switch from legacy mode to normal mode.

When I look at the web.conf, it is blank, or at least:

# [settings]
# startwebserver = 0

Should I place some values here (e.g. appServerPorts = 8065) so that the app readiness tool evaluates it as clear? And if I attempt to install over this, will the installer break?
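For reference, "normal mode" corresponds to Splunk Web running with a nonzero appServerPorts. A sketch of what an uncommented web.conf would look like; 8065 is only an example port, pick one free in your environment:

```
[settings]
appServerPorts = 8065
```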
Trying to connect to an Oracle OCI database. I followed the instructions in the "Connect Splunk DB Connect to Oracle Wallet environments using ojdbc8" troubleshooting guide; however, the drivers won't load. I get the following warning message:

WARN com.splunk.dbx.service.driver.DriverServiceImpl - action=load_drivers Can not load any driver from files [/opt/splunk/etc/apps/splunk_app_db_connect/drivers/ojdbc8-libs/ons.jar]
After installing the app I am unable to configure it. Neither the "Input" nor the "Configuration" panel will load; they just sit on a loading spinner. The one message I am able to find is:

"Unable to initialize modular input "geoipupdate" defined inside the app "SecKit_SA_geolocation": Introspecting scheme=geoipupdate: script running failed (exited with code 1)."

I have tried this on an ES and a Search Head instance with the same results.
Edit 2: After upgrading the app:

05-29-2020 09:30:33.788 -0500 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\splunktaucclib\rest_handler\handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\splunktaucclib\rest_handler\handler.py", line 179, in all
    **query
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\solnlib\packages\splunklib\binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\solnlib\packages\splunklib\binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\solnlib\packages\splunklib\binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\solnlib\packages\splunklib\binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\solnlib\packages\splunklib\binding.py", line 1244, in request
    raise HTTPError(response)
solnlib.packages.splunklib.binding.HTTPError: HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 148, in init
    hand.execute(info)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 634, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\splunk_aoblib\rest_migration.py", line 39, in handleList
    AdminExternalHandler.handleList(self, confInfo)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\splunktaucclib\rest_handler\admin_external.py", line 40, in wrapper
    for entity in result:
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\splunktaucclib\rest_handler\handler.py", line 122, in wrapper
    raise RestError(exc.status, str(exc))
splunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

05-29-2020 09:30:33.788 -0500 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [404]: Not Found -- HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}". See splunkd.log for more details.

Edit: Further detail from my 7.3.3 instance:

Object { type: "ERROR", text: "Unexpected error \" 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [404]: Not Found -- HTTP 404 Not Found -- {\"messages\":[{\"type\":\"ERROR\",\"text\":\"Not Found\"}]}\". See splunkd.log for more details." }

On the latest version of the app, Splunk 8.0.2.1. When watching the browser's network tab, I see this return a 500 error:

https://splunk.company.org/en-US/splunkd/__raw/servicesNS/nobody/TA-MS-AAD/TA_MS_AAD_azure_event_hub?output_mode=json&count=0&sort_dir=asc&sort_key=name&search=&offset=0&_=1586198941992

Can't find any logs relevant to the app. This app doesn't list 8.x.x as compatible, but I experienced similar issues on 7.2.x and 7.3.x.
Both queries work on our non-ES server; however, only the first query works on our ES server.

This query works in both places:

index=myIndexFile | head (1==1) | lookup myserverlist my_host

This query throws the following error on our ES server:

index=myIndexFile | lookup myserverlist my_host

Streamed search execute failed because: "[IndexServerName] Error in 'lookup' command: Could not construct lookup 'myserverlist , my_host'. See search.log for more details.."

I've looked at the search.log file and found nothing useful.
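The stray comma in the error ('myserverlist , my_host') hints that the lookup definition itself may differ on the ES search head. Spelling the invocation out fully can help isolate that. A sketch only; output_field stands in for whatever field the lookup actually returns (hypothetical name):

```
index=myIndexFile
| lookup myserverlist my_host AS my_host OUTPUT output_field
```

If the explicit form also fails on ES, comparing the myserverlist definition under Settings > Lookups on both search heads would be the next step.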
I want to replace the message with a custom HTML message so I can let my users know what they need to do (many of them have trouble with English, so I want to show the message in their own language). So far I've made a workaround with this:

<panel>
  <input type="dropdown" token="run" searchWhenChanged="true">
    <label>Run ?</label>
    <choice value="null">No</choice>
    <choice value="index=myindex">Yes</choice>
    <default>null</default>
  </input>
  <chart depends="$InvResults$">
    <search>
      <done>
        <condition match="$job.resultCount$==0">
          <unset token="InvResults"></unset>
        </condition>
        <condition>
          <set token="InvResults">true</set>
        </condition>
      </done>
      <query>$run$ -- searchstring--</query>
      <earliest>-15m</earliest>
      <latest>now</latest>
    </search>
    <option name="charting.axisTitleX.visibility">collapsed</option>
    <option name="charting.axisTitleY.visibility">collapsed</option>
    <option name="charting.chart">line</option>
    <option name="refresh.display">progressbar</option>
  </chart>
  <html rejects="$InvResults$">
    <div style="font-weight:bold;text-align:center;font-size:200%;color:#FF3636">Favor selecionar pelo menos 1 dos valores dos tokens acima"</div>
  </html>
</panel>

So as you can see I can manage the "No results found" case, but while the panel shows "waiting for input" I still couldn't find a way around it.
I have a line chart that plots results for a bunch of tests. One of the tests is a "baseline" result. Each result includes a value that indicates the baseline to compare with. I currently have a query that looks something like: <search base="First_Base_Search"> <query>| stats perc50("Variables.Xmetrics.totalCpuUtilizationSeconds") as "50th Percentile" by "Variables.deviceBuild"</query> </search> How can I modify the query to plot the baseline result? There is a variable called: "Variables.baselineBuild", so I can search for the baseline result and get its Variables.Xmetrics.totalCpuUtilizationSeconds. I just don't know SPL well enough to wrap my head around how I can do this secondary query and then reference it when drawing the chart UI element.
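One common pattern, sketched under the assumption that baseline rows can be found by matching Variables.deviceBuild against Variables.baselineBuild: compute the percentile per build as you already do, then append the baseline's value as its own column so the chart draws it as a separate series. The subsearch body is an assumption (a stand-in for re-running the base search's events) and would need adapting:

```
| stats perc50("Variables.Xmetrics.totalCpuUtilizationSeconds") as "50th Percentile" by "Variables.deviceBuild"
| appendcols
    [ search <same events as First_Base_Search>
      | where 'Variables.deviceBuild' = 'Variables.baselineBuild'
      | stats perc50("Variables.Xmetrics.totalCpuUtilizationSeconds") as "Baseline" ]
```

Because a <search base=...> post-process cannot contain a subsearch re-reading raw events, this may need to run as a standalone search rather than off the base search.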
I am struggling with the order of operations in my timechart query. I need to show the number of users who accessed a system daily over a 7-day period. My query shows the correct numbers for 1 day, but when I extend the timepicker to 7 days the numbers are incorrect. I've tried using dedup to get the distinct number of users, but this causes a problem when I extend the timepicker (it then dedupes users across all 7 days instead of per day). Help.

index=foo sourcetype="bar" realm="keywords"
| stats dc(User) by _time, status
| timechart span=1d count by status
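In the posted search the intermediate stats step consumes the events, so the trailing timechart ends up counting stats rows rather than users. Letting timechart compute the distinct count per day bucket directly may be closer to what's wanted. A sketch:

```
index=foo sourcetype="bar" realm="keywords"
| timechart span=1d dc(User) AS distinct_users BY status
```

With span=1d the distinct count is taken per day, so a user active on three days counts once on each of those days, which matches "users who accessed the system daily".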
index=xxxxxx sourcetype=xxxxxx
| eval import_time=strftime(_time, "%Y-%m-%d:%H")
| eval import_timeday=strftime(_time, "%Y-%m-%d")
| eventstats latest(import_time) as Last by import_timeday
| where Last = import_time
| timechart count by Product

With this search the output seems to be hourly instead of daily. Can someone help with SPL to see results daily? Thanks in advance.
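When no span is given, timechart picks a bucket size from the time range, which can easily come out hourly. Forcing span=1d should give one bucket per day regardless of the time picker. A sketch keeping the rest of the search as posted:

```
index=xxxxxx sourcetype=xxxxxx
| eval import_time=strftime(_time, "%Y-%m-%d:%H")
| eval import_timeday=strftime(_time, "%Y-%m-%d")
| eventstats latest(import_time) as Last by import_timeday
| where Last = import_time
| timechart span=1d count by Product
```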
Hi, I upgraded Splunk DB Connect from V3.2.0 to V3.3.0 and now see lots of errors like this. Anyone know what this is, or what I can do about it? (Rolled back to V3.2.0 for now). (Splunk version 8.0.1) Thanks, Paul 04-06-2020 17:41:42.379 +0000 ERROR ExecProcessor - message from "/opt/apps/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 17:41:42.378 [metrics-logger-reporter-1-thread-1] INFO c.s.d.c.h.i.ConnectionPoolMetricsLogReporter - type=TIMER, name=unnamed_pool_82649329_jdbc_mysql//pie1applicationdatabases-agenacluster-.cluster-ro-.us-west-2.rds.amazonaws.com3306/agena_pie1?useSSL_true.pool.Wait, count=18, min=0.011064, max=21.322425, mean=5.69591799641126, stddev=8.099832751566446, p50=0.011469, p75=17.237046, p95=17.237046, p98=17.237046, p99=17.237046, p999=17.237046, m1_rate=9.818194454345773E-23, m5_rate=7.220013689967951E-7, m15_rate=1.3879932839497408E-4, mean_rate=2.525958496499233E-4, rate_unit=events/second, duration_unit=milliseconds
I am writing a query which is going into a scheduled report. I have 3 servers/hosts (serv1, serv2, serv3) whose average response time I am calculating like this:

timechart span=1d eval(round(avg(req_time_seconds),2)) as avgresponse_time by host

I am looking for an output that compares serv1 with the other two and gives me a result like:

Serv1's avg_resp_time is 20% higher than Serv2
Serv1's avg_resp_time is 10% higher than Serv3

I don't want an absolute value but a percentage showing how much higher serv1 is than the others.
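After timechart splits by host, each host value becomes its own column, so the percentages can be derived with eval. A sketch, assuming the host values are literally serv1, serv2, and serv3 (if the columns come out with different names, reference them quoted exactly as they appear in the results table):

```
... | timechart span=1d eval(round(avg(req_time_seconds),2)) as avgresponse_time by host
| eval pct_higher_than_serv2 = round((serv1 - serv2) / serv2 * 100, 1)
| eval pct_higher_than_serv3 = round((serv1 - serv3) / serv3 * 100, 1)
```

The readable sentence form could then be built with another eval, e.g. "Serv1's avg_resp_time is ".pct_higher_than_serv2."% higher than Serv2".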
I tried to deploy the Splunk Enterprise Security Sandbox and it doesn't seem to have deployed correctly. When I try to re-deploy it gives me the error "Each customer is limited to one active Splunk Enterprise Security Online Sandbox." When I go to my instances, I don't have any instances active. Is there any way to reset this and try again?
We use Deployment Server for managing all our universal forwarder inputs. I need to take an accounting of all devices, from the deployment server, where the Universal Forwarder has been installed (and phones home), but has NOT been configured (no inputs forwarding). Already using the UFMA app, but this particular search isn't jumping out at me. Anyone got a one-liner for this? Thank you.
Hello, I want to create an app that serves as a home page for admins, showing all the other apps. I have about 15 apps that all need to be displayed inside this one single app. Please check the image below.
I'm looking to investigate the IP addresses with the highest peak loads on our service. Here's my current query:

application="my-app" index="my-index" request client_ip="*" user_agent="*" request="*" kube_pod="web-*"
| timechart limit=10 useother=f span="5minute" count by client_ip

It works, and it's almost what I'm looking for. The documentation states:

"If a single aggregation is specified, the score is based on the sum of the values in the aggregation for that split-by value. For example, for timechart avg(foo) BY <field> the avg(foo) values are added up for each value of <field> to determine the scores."

If I understand this correctly, timechart is picking the top 10 series whose sums of counts over the time span are greatest. That is to say, it's picking the top 10 series by greatest integral. Instead, I want to select the top 10 series with the highest peak values (at any time in the timespan). For example, if there's a single data point that shows 10,000 requests per second for a single second, I want it to be chosen over another series that shows 10 requests/second for months straight (whose integral would be much greater, but with a much lower peak). Could you please help me do that?
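One way to rank by peak rather than sum: bucket the events yourself, compute each IP's per-bucket count and its maximum, keep the rows belonging to the ten highest peaks, then pivot back into a chart. A sketch using the field names from the post:

```
application="my-app" index="my-index" request kube_pod="web-*"
| bin _time span=5m
| stats count by _time client_ip
| eventstats max(count) as peak by client_ip
| sort 0 - peak
| streamstats dc(client_ip) as rank
| where rank <= 10
| xyseries _time client_ip count
```

The sort groups each IP's rows together in descending peak order, so streamstats dc() effectively numbers the IPs 1..N and the where clause keeps the top ten.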
Hi all, I need help with a regex for the messages below.

2020-04-04T15:08:01+00:00 usdaldc <44> %WAAS-HTTPAO-4-131001: (843570) worker pool isn't healthy
2020-04-04T15:08:01+00:00 usdaldc <43> %WAAS-HTTPAO-3-131003: (843509) AOSHELL worker thread (28814 0.0) stuck for 650000 msec: start 0x7feedd6aa880(/cisco/lib64/libaoshell.so+0x50880), callback 0x4a6140(/sw/unicorn/bin/http_ao64+0xa6140)

Wanted data format, in tabular form with respect to the above alarms:

Device     Alarm          Message
usdaldc    WAAS-HTTPAO    worker pool isn't healthy
usdaldc    WAAS-HTTPAO    AOSHELL worker thread

Please help me with the code.
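A starting point, sketched against the two sample lines. It assumes the alarm code is always LETTERS-LETTERS and that the message follows the parenthesized number; note Message will capture the full remainder of the line, so the second row would need further trimming to yield just "AOSHELL worker thread":

```
... | rex "^\S+\s+(?<Device>\S+)\s+<\d+>\s+%(?<Alarm>[A-Z]+-[A-Z]+)-\d+-\d+:\s+\(\d+\)\s+(?<Message>.+)$"
| table Device Alarm Message
```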
I have put together a Python Splunk modular input that depends on Python 3 to execute, and it works just fine if I have Python 3 set as the default server-level Python version (via server.conf: python.version = python3). I'm distributing this to various Splunk users, and not all of them have this enabled yet. When I try to install it on one of those instances, introspection fails with syntax errors (Python 3-specific syntax) and I cannot get the modular input to initialize. Is there a way to tell introspection which Python to use when initializing the modular input (within the script itself)? I have tried adding the above setting to default/inputs.conf with no luck. Thanks.
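For what it's worth, Splunk 8.x's inputs.conf also accepts python.version at the stanza level. A hedged sketch only; myscheme and my_input_name are hypothetical placeholders for your scheme and stanza names, placed in the app's default/inputs.conf:

```
[myscheme://my_input_name]
python.version = python3
```

Note this cannot help on Splunk 7.x instances, which ship no Python 3 interpreter at all; there the only option is to keep the script Python 2/3 compatible.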