All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The Kafka TA hasn't been updated since before Splunk 8 was released. Is it still supported? Running with 8.0.x and 8.1.x I get this error in the search GUI:

    Unable to initialize modular input "kafka_mod" defined in the app "Splunk_TA_kafka": Introspecting scheme=kafka_mod: script running failed (exited with code 1).

splunkd.log shows:

    04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: File "/app/splunk/etc/apps/Splunk_TA_kafka/bin/kafka_mod.py", line 67
    04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: """.format(c.ta_short_name, desc, kcdl.use_single_instance())
    04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: ^
    04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: SyntaxError: invalid syntax
    04-06-2021 09:14:08.138 -0400 ERROR ModularInputs - Introspecting scheme=kafka_mod: script running failed (exited with code 1).
    04-06-2021 09:14:08.138 -0400 ERROR ModularInputs - Unable to initialize modular input "kafka_mod" defined in the app "Splunk_TA_kafka": Introspecting scheme=kafka_mod: script running failed (exited with code 1)..

EDIT: This is a simple case of not wrapping the print argument in parentheses, as Python 3 requires. But that tells me this TA is not Python 3 aware. It seems this TA has been abandoned.
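For reference, a minimal sketch of the class of error involved, assuming the failing line follows a Python 2-style print statement (the variable and value are hypothetical, for illustration only):

    # Python 2 style - a SyntaxError under Python 3:
    #     print "scheme=%s" % scheme
    # Python 3 requires print to be called as a function:
    scheme = "kafka_mod"  # hypothetical value for illustration
    print("scheme={}".format(scheme))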
Unable to see the Splunk MINT dashboard for some time. Crash logs are available, though. This never happened before.
Here is the search I'm running:

    index=cdb_summary source=CDM_*_Daily_Summary fismaid=* sourcetype=swam_summary OR sourcetype=hwam_summary
    | stats sum(TotalManaged) as TotalApplicable, count(eval(AutoFail=="False")) as GoodAssets, sum(NotScanned) as NotScanned, values(FailedCPE) as FailedCPEs, count(FailedCPE) as FailedCPE
    | eval SWAM_Score=round((TotalApplicable-NotScanned-FailedCPE)/TotalApplicable*100)

I'd like to get results from each day within a given timeframe to use for the ML Toolkit. I've tried timewrap, but it returns no results. How can I get a search to run this query for each day in a given timeframe?
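A minimal sketch of one approach, assuming one row per day is what's wanted: bucket events into daily bins with bin and group the stats by _time (parentheses added around the OR so it doesn't bind against fismaid=*):

    index=cdb_summary source=CDM_*_Daily_Summary fismaid=* (sourcetype=swam_summary OR sourcetype=hwam_summary)
    | bin _time span=1d
    | stats sum(TotalManaged) as TotalApplicable, count(eval(AutoFail=="False")) as GoodAssets, sum(NotScanned) as NotScanned, values(FailedCPE) as FailedCPEs, count(FailedCPE) as FailedCPE by _time
    | eval SWAM_Score=round((TotalApplicable-NotScanned-FailedCPE)/TotalApplicable*100)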
I am not able to create a new connection. It throws the error "There was an error processing your request. It has been logged (ID ###)".

DB Connect version: 3.3
MS SQL Server version: 12
jTDS driver version: 1.3.1

I tried to find the cause of the error in the logs (index=_internal sourcetype=dbx_server ID###); it shows something like "ERROR io.dropwizard.jersey.errors.loggingexceptionmapper error handling a request". I don't understand this error. Please suggest a solution.
Hi all,

I want to timechart the output of my stats command. I know that the _time field must be in the stats command, but when I add _time to the stats command, _time appears as a multivalue field. So how can I timechart with this field?

    index="X" sourcetype="Y"
    | stats values(A) AS NOM values(eval(round(B/60))) AS duration distinct_count(C) as cparticipant values(_time) as time by call_id
    | where duration>=2 and cparticipant>1
    | join NOM [| inputlookup D]

(Screenshot showing the multivalue _time field omitted.)

Thx
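A minimal sketch of one approach, assuming each call's earliest event time is the timestamp to chart on: keep a single _time per call_id with min(_time) instead of the multivalue values(_time), so timechart can bucket the rows (the span and charted aggregation below are illustrative):

    index="X" sourcetype="Y"
    | stats values(A) AS NOM values(eval(round(B/60))) AS duration distinct_count(C) as cparticipant min(_time) as _time by call_id
    | where duration>=2 AND cparticipant>1
    | timechart span=1h dc(call_id) as calls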
We need to add users to our (unauthenticated) internal proxy logs. Currently the proxy logs only identify the initiator by IP address. We have DHCP and/or Windows desktop logs to link the IP to a hostname. We have Windows logon events which contain the hostname and user fields. Multiple users are able to log onto certain hosts and indeed might be logged on at the same time (using fast user switching). Has anyone any advice on how to solve this problem at scale (30 million events/hour)?
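A minimal sketch of one approach, with hypothetical index, sourcetype, lookup, and field names: build IP-to-host and host-to-user lookups on a schedule, then enrich the proxy events at search time rather than joining 30M events/hour directly. Note that with fast user switching, latest(user) is only an approximation of who generated the traffic.

Scheduled lookup-builder searches:

    index=dhcp sourcetype=dhcp_lease | stats latest(host) as host by ip | outputlookup ip_host.csv
    index=wineventlog EventCode=4624 | stats latest(user) as user by host | outputlookup host_user.csv

Enrichment at search time:

    index=proxy | lookup ip_host.csv ip OUTPUT host | lookup host_user.csv host OUTPUT user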
Hello - we are in the process of replacing our HadoopConnect server environment (1 SH, 4 indexers, 1 cluster master running Splunk Enterprise 7.0.6 on Linux 6) with new Linux 7 hardware running Splunk 7.3.3. When attempting to set up the HDFS cluster on the new box through the UI, we get "Failed to get entities' object ''". If I try to configure through the clusters.conf file, it doesn't show up in the UI at all, and when trying to 'explore' the HDFS, I get: Error in 'hdfs' command: Failed to get entities' object 'admin/clusters'.

When I run the command directly through the command line it seems to work:

    bash-4.2$ $HADOOP_HOME/bin/hadoop fs -ls hdfs://xr1ph010:8020/
    Found 15 items
    drwxrwxrwx   - yarn   hadoop                  0 2021-03-22 10:29 hdfs://xr1ph010:8020/app-logs
    drwxr-xr-x+  - hdfs   hdfs                    0 2020-03-11 08:18 hdfs://xr1ph010:8020/apps
    drwxr-xr-x   - yarn   hadoop                  0 2016-07-29 20:09 hdfs://xr1ph010:8020/ats
    drwxrwxrwx   - hdfs   p-l-hdp-birs-x          0 2021-03-18 16:07 hdfs://xr1ph010:8020/benchmarks
    drwxr-xr-x+  - hdfs   hdfs                    0 2021-03-04 11:25 hdfs://xr1ph010:8020/data
    drwxrwxr-x   - hdfs   hdfs                    0 2017-02-23 11:02 hdfs://xr1ph010:8020/datascience
    drwxrwxr-x   - hdfs   hadoop                  0 2015-09-19 00:12 hdfs://xr1ph010:8020/hdp
    drwxr-xr-x+  - hdfs   hdfs                    0 2015-01-29 21:43 hdfs://xr1ph010:8020/lost+found
    drwxr-xr-x   - mapred hdfs                    0 2013-11-07 09:59 hdfs://xr1ph010:8020/mapred
    drwxr-xr-x   - hive   hdfs                    0 2020-01-30 09:46 hdfs://xr1ph010:8020/mnt
    drwxrwxrwx   - hdfs   hdfs                    0 2013-11-07 09:59 hdfs://xr1ph010:8020/mr-history
    drwxrwxr-x   - hdfs   hdfs                    0 2016-11-13 01:10 hdfs://xr1ph010:8020/ranger
    drwxr-xr-x+  - hdfs   hdfs                    0 2021-01-11 15:47 hdfs://xr1ph010:8020/system
    drwxrwxrwx   - hdfs   hdfs                    0 2021-03-22 09:10 hdfs://xr1ph010:8020/tmp
    drwxr-xr-x   - hdfs   hdfs                    0 2021-03-11 10:56 hdfs://xr1ph010:8020/user

The two main errors I see in the HadoopConnect log are HCERR0501 and HCERR2002:

    2021-04-06 08:55:07,864 ERROR hdfs_search_command.py [<module>] [341] - sid=1617713707.4, {"message": "Missing required argument", "id": "HCERR0501", "argument": "uri"}
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 330, in <module>
        hdfs.main()
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 323, in main
        self._main_impl()
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 290, in _main_impl
        raise HcException(HCERR0501, {'argument':'uri'})
    HcException: {"message": "Missing required argument", "id": "HCERR0501", "argument": "uri"}

    2021-04-06 08:51:13,583 ERROR hdfs_search_command.py [<module>] [341] - sid=1617713473.3, {"message": "Failed to get entities object", "id": "HCERR2002", "uri": "", "entity_path": "admin/clusters", "search": "", "error": "Unexpected error \"<class 'errors.HcException'>\" from python handler: \"{\"search\": \"\", \"entity_path\": \"\", \"error\": \"'NoneType' object has no attribute 'startswith'\", \"id\": \"HCERR2002\", \"message\": \"Failed to get entities object\", \"uri\": \"/servicesNS/jdoll1/HadoopConnect/configs/conf-clusters\"}\". See splunkd.log for more details."}
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 330, in <module>
        hdfs.main()
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 323, in main
        self._main_impl()
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 293, in _main_impl
        self._validateURI(k)
      File "/opt/splunk/etc/apps/HadoopConnect/bin/hdfs_search_command.py", line 71, in _validateURI
        raise HcException(HCERR2002, {'entity_path':'admin/clusters', 'search':'', 'uri':'', 'error':msg})
    HcException: {"message": "Failed to get entities object", "id": "HCERR2002", "uri": "", "entity_path": "admin/clusters", "search": "", "error": "Unexpected error \"<class 'errors.HcException'>\" from python handler: \"{\"search\": \"\", \"entity_path\": \"\", \"error\": \"'NoneType' object has no attribute 'startswith'\", \"id\": \"HCERR2002\", \"message\": \"Failed to get entities object\", \"uri\": \"/servicesNS/jdoll1/HadoopConnect/configs/conf-clusters\"}\". See splunkd.log for more details."}
Screenshots in previous .conf presentations show a Topology dashboard within the Microsoft Azure App for Splunk, specifically in .conf20's SEC1059C and .conf19's IT1433 presentations. When I download the most recent version, 1.1.0, I do not see it listed. Has the dashboard been removed/deprecated? If so, is there a way I can get the .xml so that I can attempt to re-create it?

@jconger
I'm trying to install the Events Service from the Console host. My configuration:

1- The host has been added successfully, and ssh 10.0.30.43 is passwordless (this is the other host).
2- The JAVA_HOME variable is defined on both hosts.
3- vim /etc/security/limits.d/appdynamics.conf
4- Checked the directory path from the error and found "/opt/appdynamics/platform/product/orcha/21.2.0.363/orcha-modules/bin/orcha-modules".
5- Running both of these commands from the Enterprise Console host gives me the same error:

    bin/platform-admin.sh install-events-service --profile prod --hosts 10.0.30.43 --data-dir /opt/appdynamics/eventsservice --platform-name AppDPlatform
    ./bin/platform-admin.sh submit-job --service events-service --job install --args serviceActionHost=10.0.30.43 profile=Prod

    [root@console platform-admin]# bin/platform-admin.sh install-events-service --profile prod --hosts 10.0.30.43 --data-dir /opt/appdynamics/eventsservice --platform-name AppDPlatform
    Installing Events Service on new nodes.
    ( 1/ 26) Clean up orphaned Events Service: SUCCESS
    ( 2/ 26) Register Events Service Lifecycle Listener before installing: SUCCESS
    ( 3/ 26) Validate and set parameters while installing ES: SUCCESS
    ( 4/ 26) Setup cluster configuration: SUCCESS
    ( 5/ 26) Initialize Events service cluster: SUCCESS
    ( 14/ 26) Set JRE Versions: SUCCESS
    ( 15/ 26) Check Events Service hostnames: SUCCESS
    ( 16/ 26) Verify hosts: FAILED
    Events Service installation failed.
    Failure occurred: Verify hosts
    Error message: Task failed: Facts collection on host: 10.0.30.43 as user: root with message: Error occurred while executing the task [facts].
    env: /opt/appdynamics/platform/product/orcha/21.2.0.363/orcha-modules/bin/orcha-modules: No such file or directory
Hello splunk community,

I want to change the height of the line break in a textbox that looks like this:

    <row>
      <panel id="panel1">
        <html>
          <div style="text-align: left;">
            <font size="2" color="#333333">
              Some Text<br/>
              Some more Text<br/>
            </font>
          </div>
        </html>
      </panel>
    </row>
    <row>
      <panel depends="$alwaysHide$">
        <title>CSS</title>
        <html>
          <style>
            #panel1 {width:70% !important;background-color:#eeeeee;}
            #panel1 .dashboard-panel{background-color:#eeeeee;}
            <!--Next line is not working as intended-->
            #panel1 br {line-height: 50%;}
          </style>
        </html>
      </panel>
    </row>

Presumably I have to change the last line, but I cannot get it to work properly. I want to make the line breaks more 'narrow', because right now there is too much space above a new line. Help would be appreciated.

Cheers
gerbert
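A sketch of one alternative, assuming the extra space comes from the container's default line-height rather than from the <br/> itself (line-height set on a br element has no effect in most browsers): tighten the line-height of the text container instead, e.g. in the same style block:

    #panel1 div {line-height: 1.1;}

Adjust the factor to taste, or drop the <br/> tags entirely and use block-level elements with explicit margins.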
Greetings folks, I've read somewhere that SplunkJS will no longer be supported and will instead be replaced by Splunk React. I don't remember where I read it, but can someone confirm or deny this information and refer me to an official link, please?

Happy splunking
Hi, I was trying to monitor a service using customized service endpoints for a Java agent. But I am unable to view that service in that particular endpoint, as it is getting masked by a parent business transaction. I want to monitor that service independently of that parent business transaction. Does anyone have an idea?

Thanks
Chirag Hasija
I am trying to strip the syslog header from the Zeek data that I have coming in, as the Corelight TA only likes the raw Zeek files. At the moment (on a clustered network) I have the following transforms.conf and props.conf on the indexers in /opt/splunk/etc/system/local:

transforms.conf:

    [syslog-header-stripper-ts-host]
    REGEX = ^<\d+>[A-Z][a-z]+\s+\d+\s+\d+:\d+:\d+\s[^\s]*\s\S+:\s(.*)$
    FORMAT = $1
    DEST_KEY = _raw

props.conf:

    [syslog]
    # For zeek data - stripping the syslog header
    TRANSFORMS-strip-syslog = syslog-header-stripper-ts-host

This doesn't seem to work for the data - it is still arriving at the search heads with the syslog header on it. Do I need to put these onto the search heads instead? Or do the props and transforms need editing?
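For reference, index-time TRANSFORMS like this are applied on the first full Splunk instance that parses the data (indexers or a heavy forwarder), not on search heads, and only affect data indexed after the change. An equivalent sketch using SEDCMD in props.conf, assuming the events really match this header pattern:

    [syslog]
    SEDCMD-strip_syslog_header = s/^<\d+>[A-Z][a-z]+\s+\d+\s+\d+:\d+:\d+\s\S+\s\S+:\s//

This rewrites _raw at parse time by deleting the matched header, which has the same effect as the REGEX/FORMAT/DEST_KEY=_raw transform.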
We are looking for a way to integrate GitHub (Azure) logs (activities/admin actions) with Splunk (on-prem). What would be the best way to do this?
Hi All,

Please let me know how to set a health rule on the average response time between two tiers. For example, consider A as Tier1 and B as Tier2: I need to set a health rule that fires if the average response time between these two tiers is greater than 120 ms.

Regards,
Madhusri R
What is the difference between earliest=-5min and earliest=-5min@min?
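For illustration: the @min suffix snaps the resulting time down to the whole minute. If a search runs at 12:00:42:

    earliest=-5min        resolves to 11:55:42 (exactly 5 minutes before now)
    earliest=-5min@min    resolves to 11:55:00 (5 minutes before now, snapped down to the minute)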
Hi All,

I have a KV store with 1.5 million records (which isn't much for a KV store) and about 20 fields. I am experiencing performance issues retrieving data from the KV store: if I just run a basic query to look up a record using _key, it takes about 120 seconds to give me back the result. In some of my saved searches I use a subsearch to pull the latest update timestamp from the KV store; this subsearch is getting finalized because it cannot complete in 60 seconds. The error message is "The search auto-finalized after it reached its time limit: 60 seconds." I tried limiting the number of rows using the updated timestamp, but that is not helping, nor are the top or head commands. I have tried accelerating the fields that are used in my saved searches, but that did not help.

My question for the group is: what are my options to expedite data retrieval from the KV store? I know the KV store uses MongoDB behind the scenes; I'm hoping someone has already used MongoDB techniques to speed this up, or has another solution that can be applied. Truncating the data in the KV store is not an option. Note that _key is an alphanumeric value.

Looking forward to your inputs. Thank you in advance.
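One thing worth checking, as a sketch (the lookup name and key value are hypothetical, and this assumes _key is exposed in the lookup definition's field list): put the filter inside inputlookup's where clause so the KV store (MongoDB) does the filtering, rather than filtering after all 1.5M records have been pulled into the search pipeline:

    | inputlookup my_kvstore_lookup where _key="5ab3..."

versus the much slower

    | inputlookup my_kvstore_lookup | where _key="5ab3..."

The same applies to the subsearch that fetches the latest update timestamp: filter inside the where clause first, then take the max.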
Hi,

I have 3 products 1, 2, and 3, each of which contains several elements a, b, c, d. Each product has a different specification depending on the element percentages:

Product 1: a1<a<a2, b1<b<b2, c1<c<c2
Product 2: a3<a<a4, b3<b<b4, d3<d<d4
Product 3: a5<a<a6, b5<b<b6, c5<c<c6, d5<d<d6

I would like to have a list: Product, a, b, c, d, In_Spec. I would like to use eval to assign the value to In_Spec, something like:

    | eval In_Spec=(if Product=1 and a1<a<a2 and b1<b<b2 and c1<c<c2, "yes", "no")

but how can I include product 2 and product 3? In the end I want something like:

    | eval In_Spec=(if Product 1... Product 2... Product 3..., "yes", "no")

Can someone help me with that? Many thanks in advance!
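A minimal sketch using case(), assuming a1..d6 are numeric fields (or literal thresholds) available on each row; note that SPL does not support chained comparisons like a1<a<a2, so each bound is written separately:

    | eval In_Spec=case(
        Product=1 AND a>a1 AND a<a2 AND b>b1 AND b<b2 AND c>c1 AND c<c2, "yes",
        Product=2 AND a>a3 AND a<a4 AND b>b3 AND b<b4 AND d>d3 AND d<d4, "yes",
        Product=3 AND a>a5 AND a<a6 AND b>b5 AND b<b6 AND c>c5 AND c<c6 AND d>d5 AND d<d6, "yes",
        true(), "no")

case() returns the value paired with the first condition that evaluates true; the final true() branch acts as the catch-all "no".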
Hello,

I have a dropdown that I need to be filled depending on another dropdown.

Dropdown1:
A -> X
B -> Y,Z
C -> W,Z

If I select A in the first one, X should appear already selected in the second dropdown.
If I select B in the first one, the selection should change to Y,Z in the second dropdown.
If I select C in the first one, the selection should change to W,Z in the second dropdown.

I tried with a token in the first dropdown and put the following code in the second dropdown:

    <default>$token$</default>

It works with only ONE value, but I cannot get it to work for B and C, which have 2 values. Could you please help me with it?

Thanks a lot
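A sketch in Simple XML, assuming the second input is a multiselect with token "second" (names hypothetical): use a <change> handler on the first dropdown to set form.second directly. Setting the form.* token updates the input's current selection, and a multiselect accepts a comma-separated list of values:

    <input type="dropdown" token="first">
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <change>
        <condition value="A"><set token="form.second">X</set></condition>
        <condition value="B"><set token="form.second">Y,Z</set></condition>
        <condition value="C"><set token="form.second">W,Z</set></condition>
      </change>
    </input>
    <input type="multiselect" token="second">
      <choice value="X">X</choice>
      <choice value="Y">Y</choice>
      <choice value="Z">Z</choice>
      <choice value="W">W</choice>
    </input>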