All Topics



I am receiving "splunkd experiencing a problem" in ES. It says it might automatically improve or worsen. Thank you.
Hello, I have the following SPL command:

    |tstats count where index=main host IN (H1,H2) by host, _time span=1h
    | predict "count" as prediction algorithm=LLP holdback=168 future_timespan=240 period=168 upper90=upper90 lower90=lower90
    | `forecastviz(240, 168, "count", 90)`

It makes predictions about the count per host and outputs a table of the form:

    host | _time | count | lower90(prediction) | prediction | upper90(prediction)
    H1 | 2021-07-10 00:00 | 6170671 | 2494994.26372 | 6170671.0 | 9846347.73628
    H1 | 2021-07-10 01:00 | 6231397 | 2456899.6988 | 6231397.0 | 10005894.3012
    ...
    H2 | 2021-07-10 05:00 | 5216984 | 1722288.55477 | 5216984.0 | 8711679.44523
    H2 | 2021-07-10 06:00 | 5297360 | 1979214.14187 | 5297360.0 | 8615505.85813
    ...

I would like to calculate linear regressions for each host in this table with the MLTK macro

    `regressionstatistics("count", prediction)`

to output a table of the form:

    host | rSquared | RMSE
    H1 | 0.8042 | 1195199.83
    H2 | 0.7842 | 1126684.87

I can't get it to work. Could you help me? Thanks in advance.
Sincerely, M.
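For context, the two statistics that `regressionstatistics` reports can also be computed directly from the actual/predicted pairs, grouped by host. Below is a minimal Python sketch of that computation; the sample rows and the grouping code are illustrative, not the MLTK implementation.

```python
from math import sqrt

def regression_statistics(actual, predicted):
    """Return (r_squared, rmse) for paired actual/predicted values."""
    n = len(actual)
    mean_actual = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    r_squared = 1 - ss_res / ss_tot
    rmse = sqrt(ss_res / n)
    return r_squared, rmse

# Hypothetical rows shaped like the table above: (host, count, prediction).
rows = [
    ("H1", 6170671, 6100000.0),
    ("H1", 6231397, 6300000.0),
    ("H2", 5216984, 5200000.0),
    ("H2", 5297360, 5310000.0),
]

# Group the pairs by host, then compute the statistics per host.
by_host = {}
for host, count, prediction in rows:
    by_host.setdefault(host, ([], []))
    by_host[host][0].append(count)
    by_host[host][1].append(prediction)

for host, (actual, predicted) in by_host.items():
    r2, rmse = regression_statistics(actual, predicted)
    print(host, round(r2, 4), round(rmse, 2))
```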
I have two different datacenters. hostA and hostB are like datacenters and 1, 2, 3, ... are hosts: hostA-1, hostA-2, hostA-3, hostA-4, hostA-5 and hostB-5, hostB-6, hostB-7, hostB-8. I want to compare those datacenters side by side and only get the token values that match. Here is the sample log:

    2021-08-05 19:01:59.677 INFO RestTemplate: {"logType":"STANDARD","message":"==========================request log================================================", "Method":"POST","Headers":"{Accept=[application/json], Content-Type=[application/json], Authorization=[Bearer eyJhQM8DMG8bEtCIsiZ0GjyYWxwt3ny1Q], Token=[basd23123], "Request body": {"accountNumber":824534875389475}}}
    hostA = 1  source = a.log  sourcetype = a_log

    2021-08-05 19:01:59.687 INFO RestTemplate: {"logType":"STANDARD","message":"==========================request log================================================", "Method":"POST","Headers":"{Accept=[application/json], Content-Type=[application/json], Authorization=[Bearer eyJhQM8DMG8bEtCIsiZ0GjyYWxwt3ny1Q], Token=[basd23123], "Request body": {"accountNumber":824534875389475}}}
    hostB = 6  source = a.log  sourcetype = a_log

If the Authorization value matches on both hostA and hostB, then only the matched values are needed, e.g.:

    hostA     | hostB     | result
    asd132c   | asd132c   | matched
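Structurally, the comparison described above boils down to extracting the Bearer value from each event and intersecting the sets seen in each datacenter. A hedged Python sketch of that extraction; the sample lines and the regex are assumptions based on the log format shown.

```python
import re

# Hypothetical sample lines modeled on the log format in the question.
log_hostA = ['Authorization=[Bearer asd132c], Token=[basd23123]']
log_hostB = ['Authorization=[Bearer asd132c], Token=[xyz98765]']

AUTH_RE = re.compile(r'Authorization=\[Bearer ([^\]]+)\]')

def auth_tokens(lines):
    """Extract the Bearer value from each log line into a set."""
    return {m.group(1) for line in lines for m in [AUTH_RE.search(line)] if m}

# Tokens present in both datacenters are the "matched" results.
matched = auth_tokens(log_hostA) & auth_tokens(log_hostB)
print(matched)
```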
Hi all :), I am trying to use the Splunk REST API from Postman. When I make a request on port 8089 I get "COULD NOT GET ANY RESPONSE". The URL is the right one and the host is listening on the port. I can't get a response using curl or Python either. The web UI is working properly (port 8000), and so is sending data using HEC (port 8089).
Hello team, I want to forward OpenTelemetry Collector logs to Splunk. I'm not referring to sending application logs to Splunk using the Splunk HEC exporter. When you have the logging exporter configured on your collector, your terminal shows collector logs such as

    2021-08-09T12:55:28.110Z info healthcheck/handler.go:128 Health Check state change {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "ready"}
    2021-08-09T12:55:28.110Z info service/service.go:267 Everything is ready. Begin running and processing data.

when the collector is ready, or

    2021-08-09T12:55:33.511Z info exporterhelper/queued_retry.go:276 Exporting failed. Will retry the request after interval. {"component_kind": "exporter", "component_type": "otlp", "component_name": "otlp", "error": "failed to push log data via OTLP exporter: rpc error: code = DeadlineExceeded desc = context deadline exceeded", "interval": "4.121551979s"}

when the collector fails to export data. I want to send these collector logs to Splunk. Is there a way to configure the Splunk HEC exporter to capture these collector logs and send them to Splunk?
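One possible workaround, assuming the collector runs as a plain process: the collector writes its own telemetry to stderr, so that stream can be appended to a file which a Splunk universal forwarder (or a second file-reading pipeline) then monitors. A stand-in shell sketch of the redirection pattern; the log path is illustrative and the `sh -c` echo merely stands in for the real collector invocation shown in the comment.

```shell
# Redirection pattern: capture the collector's own stderr output in a
# file that a forwarder can monitor. Replace the stand-in below with
# the real invocation, e.g.:
#   ./otelcol --config collector.yaml 2>> "$LOG"
LOG=/tmp/otelcol-demo.log
: > "$LOG"
sh -c 'echo "2021-08-09T12:55:28.110Z info service/service.go:267 Everything is ready." 1>&2' 2>> "$LOG"
cat "$LOG"
```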
We have 3 clustered indexers and an original Search Head. We installed an app that has a custom props.conf on the Search Head, and it is NOT showing the proper extracted fields when performing searches. We deployed a new Search Head and installed the exact same app; the new Search Head shows the proper fields. The two servers appear to be identical, and running

    splunk cmd btool props list --debug

shows the exact same results, line by line, for the app. The original server does have some extra apps, but given the btool results above, there do not appear to be any conflicts with other apps. What would be the next steps in troubleshooting why the original Search Head does not show the proper fields?
How can I extract this:

    "properties": {"nextLink": null,
    "columns": [
      {"name": "Cost", "type": "Number"},
      {"name": "Date", "type": "Number"},
      {"name": "Charge", "type": "String"},
      {"name": "Publisher", "type": "String"},
      {"name": "Resource", "type": "String"},
      {"name": "Resource", "type": "String"},
      {"name": "Service", "type": "String"},
      {"name": "Standard", "type": "String"},
    "rows": [
      [2.06, 20210807, "usage", "uuuu", "hhh", "gd", "bandwidth", "azy", "HHH"],
      [2.206, 20210807, "usage", "uuuhhh", "ggg", "gd", "bandwidth", "new", "YYY"]
    ]

The number of columns can increase.
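In SPL this usually involves spath plus multivalue functions; structurally, though, the task is just zipping the column names with each row, which works for any number of columns. A Python sketch on a truncated copy of the payload above (fields omitted for brevity):

```python
import json

# Truncated structure modeled on the question's payload.
payload = {
    "properties": {
        "columns": [
            {"name": "Cost", "type": "Number"},
            {"name": "Date", "type": "Number"},
            {"name": "Charge", "type": "String"},
        ],
        "rows": [
            [2.06, 20210807, "usage"],
            [2.206, 20210807, "usage"],
        ],
    }
}

# Pair each row's values with the column names, independent of column count.
names = [c["name"] for c in payload["properties"]["columns"]]
records = [dict(zip(names, row)) for row in payload["properties"]["rows"]]
print(json.dumps(records, indent=2))
```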
Hello,

After upgrading to Splunk 8 from Splunk 6, it seems that the "show_source" view (used in "Event actions" -> "Show source") isn't wrapping long lines as it used to. We've isolated the new setting in the jsx code in /opt/splunk/share/splunk/search_mrsparkle/exposed/js/views/show_source/index.jsx:

    tableRow: {
        border: 0,
        margin: 0,
        padding: 0,
        whiteSpace: 'nowrap',
    },

Before, the setting was 'white-space: pre-wrap', as we can see in our backups of search_mrsparkle from Splunk 6. Is there any way for us to get the same behavior as before, so that long lines in "Show source" are wrapped?
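If the goal is simply to restore the Splunk 6 behavior, the obvious candidate is changing that one property back, with the caveat that editing files under share/splunk/search_mrsparkle is unsupported and will be overwritten by the next upgrade. A hypothetical sketch of the edited style object:

```javascript
// Hypothetical edit: the style object from index.jsx with the
// Splunk 6 value restored. Note that changes to shipped files under
// share/splunk/search_mrsparkle are unsupported and are overwritten
// on upgrade, so this would need reapplying after each upgrade.
const tableRow = {
  border: 0,
  margin: 0,
  padding: 0,
  whiteSpace: 'pre-wrap', // was 'nowrap' in Splunk 8
};

console.log(tableRow.whiteSpace);
```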
Hi guys,

I'm currently building my own lab in Docker, where each instance is mapped to a different host port using -P with docker run. Whenever I attempt to set anything in targetUri within my deployment client config, the DC never phones home. Variations of my deploymentclient.conf:

Deployment server container name only:

    [deployment-client]
    phoneHomeIntervalInSecs = 60
    disabled = false
    [target-broker:deploymentServer]
    targetUri = DeploymentServer

Deployment server container name with management port:

    [deployment-client]
    phoneHomeIntervalInSecs = 60
    disabled = false
    [target-broker:deploymentServer]
    targetUri = DeploymentServer:8089

Deployment server container IP with management port:

    [deployment-client]
    phoneHomeIntervalInSecs = 60
    disabled = false
    [target-broker:deploymentServer]
    targetUri = 172.19.0.2:8089

IP grabbed with:

    docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' DeploymentServer

Other info:
- I have an app in deployment-apps on my DS.
- The internal logs below were indexed from UF1 when initiating with IP and port.
- I can ping the IPs from each machine.
- I turned off the Windows firewall to ensure it wasn't being blocked; nothing changed.
- Even when setting the DS with -e on the UF container, it does not work.

Internal logs from uf01:
- Phonehome thread started
- channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
- channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
- Handshake done

docker ps output:

IDX1
4c5998526d1b splunk/splunk:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49169->8000/tcp, :::49169->8000/tcp, 0.0.0.0:49168->8065/tcp, :::49168->8065/tcp, 0.0.0.0:49167->8088/tcp, :::49167->8088/tcp, 0.0.0.0:49166->8089/tcp, :::49166->8089/tcp, 0.0.0.0:49165->8191/tcp, :::49165->8191/tcp, 0.0.0.0:49164->9887/tcp, :::49164->9887/tcp, 0.0.0.0:49163->9997/tcp, :::49163->9997/tcp idx1

DS
843454a553ec splunk/splunk:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49159->8000/tcp, :::49159->8000/tcp, 0.0.0.0:49158->8065/tcp, :::49158->8065/tcp, 0.0.0.0:49157->8088/tcp, :::49157->8088/tcp, 0.0.0.0:49156->8089/tcp, :::49156->8089/tcp, 0.0.0.0:49155->8191/tcp, :::49155->8191/tcp, 0.0.0.0:49154->9887/tcp, :::49154->9887/tcp, 0.0.0.0:49153->9997/tcp, :::49153->9997/tcp DeploymentServer

UF01
923c0bc20fb3 splunk/universalforwarder:latest "/sbin/entrypoint.sh…" 3 hours ago Up About an hour (healthy) 0.0.0.0:49162->8088/tcp, :::49162->8088/tcp, 0.0.0.0:49161->8089/tcp, :::49161->8089/tcp, 0.0.0.0:49160->9997/tcp, :::49160->9997/tcp

It's definitely confusing. Anyone able to help me out?
From where can I download Splunk 6.6.2 (build 4b804538c686)? I can see from the portal that the oldest version I can download is 7.1.1. I need this for specific testing purposes.
I have a JSON file of around 6 GB. Can I upload this file to a specific index instead of sending it with POST, object by object?
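Uploading object by object is not required in principle: HEC's event endpoint accepts batched payloads of concatenated JSON objects, each of which can carry a per-event index override. A hedged Python sketch of the batching side only; the URL, token, index name, and batch size are placeholders, and the actual POST (with header "Authorization: Splunk <token>") is omitted since it needs a live endpoint.

```python
import json

# Hypothetical values -- substitute your HEC endpoint, token, and index.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"
TARGET_INDEX = "my_index"
BATCH_SIZE = 1000

def hec_batch(events, index):
    """Serialize a batch of events, each addressed to a specific index."""
    return "\n".join(
        json.dumps({"index": index, "event": e}) for e in events
    )

def batches(iterable, size):
    """Yield fixed-size lists from any iterable, without loading it all."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# For each batch, POST hec_batch(batch, TARGET_INDEX) to HEC_URL with
# the header {"Authorization": f"Splunk {HEC_TOKEN}"}.
```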
I have a dashboard with several different base searches that are transforming searches. However, I get the error about the maximum number of concurrent historical searches. Unfortunately we can't upgrade the CPU count or change the role. Is there a way to queue them, so that, for example, when the first 3 are finished the next 3 searches start? Or to set the order in which the searches start and finish?
ERROR [stories_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (stories) (0000T7P5) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:01m:31s:258ms. There were errors during the synchronization!
Hello dear community, I am new to Splunk and I want to monitor my Splunk architecture via ITSI and the corresponding content pack. It seems to work, but every node which has "OS" in it does not seem to work; all the others do. Here is a link to the doc; the bottom nodes containing "*OS*" are the ones not working: About the Content Pack for Monitoring Splunk as a Service - Splunk Documentation. Any ideas for a beginner? I would be so grateful. Thanks, best regards, and thank you in advance. Mario
When I try to push to the search heads from the deployer using the command

    /opt/splunk/bin/splunk apply shcluster-bundle -target

it gives the error "Error, Parameters must be in the form '-parameter value'". Is there another way to achieve this?
Hi,

I have installed a new app from Splunkbase on a Splunk Cloud trial platform. After successful installation, the app shows a continuous loading message. When I searched the _internal index for errors, the following details were shown. Can anyone guide me on how to proceed with resolving this issue? https://splunkbase.splunk.com/app/4486/

Fresh install; accessing the app shows a loading message.

    08-07-2021 11:40:08.253 +0000 ERROR AdminManagerExternal [5376 TcpChannelThread] - Stack trace from python handler:
    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 154, in init
        hand = handler(mode, ctxInfo)
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__
        get_splunkd_uri(),
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri
        scheme, host, port = get_splunkd_access_info()
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info
        'server', 'sslConfig', 'enableSplunkdSSL')):
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value
        stanzas = get_conf_stanzas(conf_name)
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas
        out[section] = {item[0]: item[1] for item in parser.items(section)}
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 870, in items
        return [(option, value_getter(option)) for option in d.keys()]
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 870, in <listcomp>
        return [(option, value_getter(option)) for option in d.keys()]
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 867, in <lambda>
        section, option, d[option], d)
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 387, in before_get
        self._interpolate_some(parser, option, L, value, section, defaults, 1)
      File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some
        "found: %r" % (rest,))
    backports.configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%'

Thanks & Regards,
Raghavendra
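The final line of the trace is generic configparser behavior: with interpolation enabled, a bare "%" in any value the parser reads is rejected. A minimal reproduction of the error; the stanza and key below are hypothetical, and the actual offending value sits in one of the .conf files the add-on parses.

```python
import configparser

# Hypothetical .conf content with a bare "%" in a value.
CONF = """
[sslConfig]
sslPassword = abc%def
"""

# With default interpolation, reading succeeds but retrieving the value
# raises the InterpolationSyntaxError seen in the stack trace.
parser = configparser.ConfigParser()
parser.read_string(CONF)
try:
    parser.get("sslConfig", "sslPassword")
except configparser.InterpolationSyntaxError as err:
    print("reproduced:", err)

# With interpolation disabled the same content parses cleanly, which is
# why escaping the "%" as "%%" (or removing it from the offending value)
# resolves the error.
raw = configparser.ConfigParser(interpolation=None)
raw.read_string(CONF)
print(raw.get("sslConfig", "sslPassword"))
```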
Can anyone tell me the steps to deploy and configure multiple apps in a cluster with heavy forwarders?
When I type this command in Git Bash to get the cluster status

    /opt/splunk/bin/splunk apply shcluster-bundle -target

I keep getting the error "Error, Parameters must be in the form '-parameter value'". Can someone tell me how to resolve this?
Hi,   I want to delete my free Splunk account. I'm not going to call. How do I do this online? Thanks
Hi, I have several datasets that have the exact same format, with only the source of the data differing. I've duplicated my macros from the dev environment to the test environment, but I'm receiving no results for the test macro, despite getting results when I run the tstats search as a datamodel search instead. Here is the dev search:

    tstats summariesonly=true count("dev_metric.exchangeId") as avg_TPS from datamodel=metric by _time, dev_metric.date span=1s
    | search "dev_metric.date"=$date$
    | stats avg(avg_TPS) as averageTps by dev_metric.date
    | eval averageTps=round(averageTps/1000,3)
    | appendpipe [tstats count | where count=0]
    | fillnull value=0.000 averageTps
    | fields averageTps

Here is the test search:

    tstats summariesonly=true count("test_metric.exchangeId") as avg_TPS from datamodel=metric by _time, test_metric.date span=1s
    | search "test_metric.date"=$date$
    | stats avg(avg_TPS) as averageTps by metric.date
    | eval averageTps=round(averageTps/1000,3)
    | appendpipe [tstats count | where count=0]
    | fillnull value=0.000 averageTps
    | fields averageTps

I've checked the dataset and the needed events are there, and I've run a | datamodel search equivalent to the tstats that worked fine. What could be the reason I'm receiving no results? And what could be some steps to resolve this?