All Topics


I have a dashboard with several different base searches that are transforming searches. However, I get the error about the maximum number of concurrent historical searches. Unfortunately we can't upgrade the CPU count or change the role. I was thinking: is there a way of setting up a queue for them, so that, for example, when the first 3 finish, the next 3 searches start? Or a way to set the order in which the searches start and finish?
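One workaround the community often uses for this: a Simple XML search whose query string contains a token is not dispatched until that token is defined, so setting a token in the first base search's <done> handler delays the second. A minimal sketch (index and sourcetype names are placeholders, not from the post):

```xml
<dashboard>
  <!-- first base search runs immediately -->
  <search id="base1">
    <query>index=main sourcetype=a | stats count by host</query>
    <done>
      <!-- define an empty token when base1 finishes -->
      <set token="base1_done"></set>
    </done>
  </search>
  <!-- base2's query references $base1_done$, so it is not
       dispatched until base1 completes and sets the token -->
  <search id="base2">
    <query>index=main sourcetype=b $base1_done$ | stats count by host</query>
  </search>
</dashboard>
```

Chaining more <done>/<set token> handlers in the same way lets you run the base searches in batches instead of all at once.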
ERROR [stories_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (stories) (0000T7P5) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:01m:31s:258ms. There were errors during the synchronization!
Hello dear community, I am new to Splunk and I wanted to monitor my Splunk architecture via ITSI and the corresponding content pack. It seems to work, but every node which has "OS" in its name does not seem to work; all the others do. Here is a link to the doc; the bottom nodes containing "*OS*" are the ones not working: About the Content Pack for Monitoring Splunk as a Service - Splunk Documentation. Any idea for a beginner? I would be so grateful. Thanks, best regards, and thank you in advance. Mario
When I try to push to the search heads from the deployer using the command /opt/splunk/bin/splunk apply shcluster-bundle -target it gives the error "Error, Parameters must be in the form '-parameter value'". Is there another way to achieve this?
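The error message suggests -target was given without a value. The documented form of the command takes the management URI of one cluster member plus credentials (hostname and password below are placeholders):

```shell
/opt/splunk/bin/splunk apply shcluster-bundle \
  -target https://<sh_member>:8089 -auth admin:<password>
```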
Hi, I have installed a new app from Splunkbase on a Splunk Cloud trial platform. After successful installation the app shows a continuous loading message. When I checked _internal for errors, the following details are shown. Can anyone guide me on how to proceed with resolving this issue? https://splunkbase.splunk.com/app/4486/

Fresh Splunk install; accessing the app shows a loading message.

08-07-2021 11:40:08.253 +0000 ERROR AdminManagerExternal [5376 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 154, in init
    hand = handler(mode, ctxInfo)
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__
    get_splunkd_uri(),
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri
    scheme, host, port = get_splunkd_access_info()
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info
    'server', 'sslConfig', 'enableSplunkdSSL')):
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value
    stanzas = get_conf_stanzas(conf_name)
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas
    out[section] = {item[0]: item[1] for item in parser.items(section)}
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 870, in items
    return [(option, value_getter(option)) for option in d.keys()]
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 870, in <listcomp>
    return [(option, value_getter(option)) for option in d.keys()]
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 867, in <lambda>
    section, option, d[option], d)
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 387, in before_get
    self._interpolate_some(parser, option, L, value, section, defaults, 1)
  File "/opt/splunk/etc/apps/TA-securityscorecard/bin/ta_securityscorecard/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some
    "found: %r" % (rest,))
backports.configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%'

Thanks & regards, Raghavendra
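The final exception points at configparser interpolation: a bare % in the value of some .conf file the add-on reads. A minimal reproduction of that error class, and the usual two ways around it (the stanza and key names here are made up, not from the TA):

```python
import configparser

# A bare '%' in a value trips configparser's interpolation parser,
# producing the same InterpolationSyntaxError as in the traceback.
parser = configparser.ConfigParser()
parser.read_string("[server]\nkey = 100%\n")
try:
    parser.get("server", "key")
except configparser.InterpolationSyntaxError as e:
    print("interpolation error:", e)

# Doubling the '%' escapes it under interpolation...
escaped = configparser.ConfigParser()
escaped.read_string("[server]\nkey = 100%%\n")
print(escaped.get("server", "key"))  # -> 100%

# ...and disabling interpolation reads the value verbatim.
raw = configparser.ConfigParser(interpolation=None)
raw.read_string("[server]\nkey = 100%\n")
print(raw.get("server", "key"))  # -> 100%
```

In practice that usually means hunting for a stray % (for example in a password or URL written into a .conf file) and doubling it to %%.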
Can anyone tell me the steps to deploy and configure multiple apps in a cluster with heavy forwarders?
When I type the command /opt/splunk/bin/splunk apply shcluster-bundle -target in Git Bash to get the cluster status, I keep getting the error "Error, Parameters must be in the form '-parameter value'". Can someone tell me how to resolve this?
Hi,   I want to delete my free Splunk account. I'm not going to call. How do I do this online? Thanks
Hi, I have several datasets that have the exact same format, with only the source of the data differing. I've duplicated my macros from the dev environment to the test environment, but I'm receiving no results for the test macro, even though when I run the equivalent search against the datamodel directly I get results.

Here is the dev search:

tstats summariesonly=true count("dev_metric.exchangeId") as avg_TPS from datamodel=metric by _time, dev_metric.date span=1s
| search "dev_metric.date"=$date$
| stats avg(avg_TPS) as averageTps by dev_metric.date
| eval averageTps=round(averageTps/1000,3)
| appendpipe [tstats count | where count=0]
| fillnull value=0.000 averageTps
| fields averageTps

Here is the test search:

tstats summariesonly=true count("test_metric.exchangeId") as avg_TPS from datamodel=metric by _time, test_metric.date span=1s
| search "test_metric.date"=$date$
| stats avg(avg_TPS) as averageTps by metric.date
| eval averageTps=round(averageTps/1000,3)
| appendpipe [tstats count | where count=0]
| fillnull value=0.000 averageTps
| fields averageTps

I've checked the dataset and the needed events are in there, and I've done a | datamodel search equivalent to the tstats that worked fine. What could be the reason I'm receiving no results, and what could be some steps to resolve this?
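One difference worth ruling out between the two searches above: the dev version groups the final stats by dev_metric.date, while the test version groups by metric.date. A version of the test search with the prefix aligned (assuming test_metric is the intended dataset object name) would read:

```spl
tstats summariesonly=true count("test_metric.exchangeId") as avg_TPS from datamodel=metric by _time, test_metric.date span=1s
| search "test_metric.date"=$date$
| stats avg(avg_TPS) as averageTps by test_metric.date
| eval averageTps=round(averageTps/1000,3)
| appendpipe [tstats count | where count=0]
| fillnull value=0.000 averageTps
| fields averageTps
```

Grouping by a field that does not exist in the result set makes stats emit nothing, which would match the "no results" symptom.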
We are currently exploring SplunkJS for rendering data in our custom app. We are able to authenticate and display charts based on searches directly from our web app, but we are having difficulty integrating with a React app since SplunkJS is not component based. We saw the new Splunk UI / Dashboard Studio with many React components, e.g. splunk/react-ui and splunk-visualizations. We think we can use these React components in our external web app, but we are not able to see any authentication mechanism in them. How can we use these React components in an external app that authenticates against Splunk Enterprise, fires searches, and displays charts? Thanks in advance.
Hi, I'm getting the following error when starting the container using the command below. Any idea?

Sunday 08 August 2021 14:19:09 +0000 (0:00:00.050) 0:05:37.573 *********
TASK [splunk_standalone : Setup global HEC] ************************************
fatal: [localhost]: FAILED! => {
    "cache_control": "private",
    "changed": false,
    "connection": "Close",
    "content_length": "130",
    "content_type": "text/xml; charset=UTF-8",
    "date": "Sun, 08 Aug 2021 14:19:11 GMT",
    "elapsed": 0,
    "redirected": false,
    "server": "Splunkd",
    "status": 401,
    "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
    "vary": "Cookie, Authorization",
    "www_authenticate": "Basic realm=\"/splunk\"",
    "x_content_type_options": "nosniff",
    "x_frame_options": "SAMEORIGIN"
}
MSG: Status code was 401 and not [200]: HTTP Error 401: Unauthorized

PLAY RECAP *********************************************************************
localhost : ok=56 changed=2 unreachable=0 failed=1 skipped=58 rescued=0 ignored=0

Sunday 08 August 2021 14:19:11 +0000 (0:00:02.151) 0:05:39.725 *********
===============================================================================
splunk_common : Get Splunk status ------------------------------------- 233.48s
splunk_common : Start Splunk via CLI ----------------------------------- 48.29s
splunk_common : Update Splunk directory owner -------------------------- 20.43s
splunk_common : Wait for splunkd management port ----------------------- 10.10s
splunk_common : Test basic https endpoint ------------------------------- 4.14s
Gathering Facts --------------------------------------------------------- 3.16s
splunk_common : Cleanup Splunk runtime files ---------------------------- 2.49s
splunk_standalone : Setup global HEC ------------------------------------ 2.15s
splunk_common : Check if /sbin/updateetc.sh exists ---------------------- 1.40s
splunk_common : Check for scloud ---------------------------------------- 1.38s
splunk_common : Start Splunk via service -------------------------------- 1.28s
splunk_common : Update /opt/splunk/etc ---------------------------------- 0.90s
splunk_common : Find manifests ------------------------------------------ 0.68s
splunk_common : include_tasks ------------------------------------------- 0.49s
splunk_common : include_tasks ------------------------------------------- 0.46s
splunk_common : Remove user-seed.conf ----------------------------------- 0.43s
splunk_common : Enable splunktcp input ---------------------------------- 0.39s
splunk_common : Check for existing installation ------------------------- 0.38s
splunk_common : Ensure license path ------------------------------------- 0.36s
splunk_common : Create .ui_login ---------------------------------------- 0.30s

# docker run --name splunk-mount -v opt-splunk-etc:/opt/splunk/etc -v opt-splunk-var:/opt/splunk/var -d -p 8000:8000 -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD=password splunk/splunk:latest
I have a drilldown that shows me some Test values, and this is the on-click search:

index=main | where Test="$click.value$"

The problem is when $click.value$ contains a double quote ("). Then I get an error in the search screen: "Unbalanced quotes". How do I fix it?
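Assuming the drilldown sets a token that is then interpolated into the search, Simple XML's |s token filter wraps the value in double quotes and escapes any quotes inside it, so the hand-written quotes can be dropped:

```xml
<drilldown>
  <!-- |s emits the value wrapped in double quotes,
       with embedded quotes escaped -->
  <set token="test_value">$click.value|s$</set>
</drilldown>
```

The search then becomes index=main | where Test=$test_value$ with no manual quotes around the token, so an embedded " in the clicked value can no longer unbalance the query.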
Hi Community, we have integrated our ITSI cluster with ServiceNow and tickets are being created fine, but recently we observed a strange behavior from Splunk ITSI. Episodes generated in Episode Review create a ServiceNow incident, and once the issue resolves, the episode gets resolved. But when the same issue happens again on the same node, the resolved episode's count gets increased instead of a new notable event and a new episode being created. The ITSI logs do not provide much detail about this; please help check why. Best regards, Vinay vi323056@wipro.com
Using the Splunk SDK, I am ingesting JSON data into a Splunk index via this line of code:

index.submit(event, host="localhost", sourcetype="covid_vacc_data_ingest")

This line of code is working and data is ingested, but the timestamp is always the ingestion time rather than the date field on the event. Here is a screenshot of my settings in Splunk Enterprise for this sourcetype. Here is a screenshot of what the ingested data looks like. I want the _time field on the left to be the date field on the right. Any suggestions? Not sure what I am doing wrong. Thank you!
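If the events are parsed as JSON with indexed extractions, timestamp recognition can be pointed at a named JSON field in props.conf on the parsing tier. A sketch only: the sourcetype name comes from the post, but the field name date and the time format are assumptions to adjust to the actual data:

```conf
[covid_vacc_data_ingest]
INDEXED_EXTRACTIONS = json
# JSON field holding the event's own timestamp (assumed name)
TIMESTAMP_FIELDS = date
# adjust to the actual value layout, e.g. 2021-08-07
TIME_FORMAT = %Y-%m-%d
```

Without a matching TIME_FORMAT (or TIME_PREFIX pointing at the field's value), Splunk falls back to the index time, which matches the symptom described.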
I understand the "Classic Experience". I understand the new k8s-based "Victoria Experience". But my Cloud instance claims to be part of the "Niagara Experience", and I cannot get anyone at Splunk, including support and my inside sales rep, to tell me what that is. Bueller? What is "Niagara"? Is the underlying platform k8s, as it is with "Victoria"? What customer-facing differences are there between Classic, Victoria, and Niagara? Why isn't Niagara listed on https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/Admin/Experience?
Hi everyone, I'd like to know if it is possible to have the following example dashboard with a single table panel. For example:

column1: src_ip
column2: dest_ip
column3: MB_downloaded

So this is simple, but what I'd like to do is be able to treat each line and trace what happened. I'd like to do it with 2 additional columns:

- one with a checkbox: it has to be checked if the subject (described in the row) has been acknowledged by the analyst. If the row is ACKed, it becomes green; else, it stays red.
- one with a comment section: the analysis of the row (example: "John downloaded 10 MB from google.com; he downloaded an .xlsx file named test.xlsx").

Also, is there a way to keep a trace of what was acknowledged? Maybe export every checked row to a lookup? I guess this needs .js and .css files? Or can it be done with a simple XML dashboard? Thank you in advance!
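The lookup idea at the end can be sketched in plain SPL before any .js/.css work: keep acknowledgements in a CSV lookup and join it into the table. Everything here is an assumption for illustration: the base search, the lookup name ack_status.csv, and its columns src_ip, dest_ip, acked, comment:

```spl
index=proxy
| stats sum(bytes)/1024/1024 as MB_downloaded by src_ip dest_ip
| lookup ack_status.csv src_ip dest_ip OUTPUT acked comment
| fillnull value="unacked" acked
```

An analyst action (e.g. a drilldown search) could persist a new acknowledgement with | outputlookup append=true ack_status.csv. Plain Simple XML can color cells by value via table <format>/<colorPalette>, but an interactive checkbox in a table cell does require custom JS/CSS.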
Hi, we are planning to implement Splunk in our environment, and we need a demo session on APM, RUM, and end-to-end user monitoring.
Hello All, I have 3 indexers in a cluster and data is being stored on a NAS server; for one server, data is stored in cold buckets on mounted storage. I have copied the data from the NAS to 2 servers. The one with the mount point is giving me a duplicate error, and I am not able to see the data copied into /opt/splunk/var/lib/splunk/accessdb/thaweddb/; it is marked as disabled due to conflict. I have tried multiple commands to rebuild the Splunk db on all the indexers, and I am getting the error shown in the attached screenshots. @ivanreis @lmyrefelt @kmorris_splunk @Masa @jkat54 @493669 @mayurr98
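For the thawed-bucket side of this, a common sequence is to rebuild each restored bucket with the splunk CLI. The bucket name below is a placeholder; stop splunkd first, and note that a thawed bucket whose ID collides with a bucket already on the indexer can be reported as disabled due to conflict until it is renamed with a unique ID:

```shell
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk rebuild /opt/splunk/var/lib/splunk/accessdb/thaweddb/db_<newestTime>_<oldestTime>_<bucketId>
/opt/splunk/bin/splunk start
```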
I'm using the HTTP Event Collector on my free trial cloud instance.

URLs I tried:
https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector/event/1.0
https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector/event
https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector

Payloads I tried:
1) {time: -3730851658780559, event: { event: 'test', message: 'localhost event', myts: 1628340011441 }}
2) '{"time":"1628340065.594","event":{"message":"localhost event","severity":"info"}}'

Response I'd get: { text: 'Success', code: 0 }

Then I tried these search queries in my Splunk search app, and I get 0 events:
- event.message=*
- event=*

What is happening?
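Two things worth checking against the post above: the first payload's time value is not a plausible epoch (a large negative number can land the event far outside any search window), and HEC indexes only the contents of the event key, not the envelope, so searches on event.message or event= will not match. A minimal sketch of a post and a matching search; the token and instance ID are placeholders:

```shell
# post one event to the HEC event endpoint
curl -k "https://inputs.<MY_SPLUNK_INSTANCE_ID>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"time": 1628340065, "event": {"message": "localhost event", "severity": "info"}}'
```

To find it, search the index the HEC token writes to (and widen the time range, since "time" sets _time), e.g.: index=* "localhost event", or message="localhost event" if search-time JSON extraction is in effect.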
How can I improve Splunk Deployment Server scalability?