All Posts
| makeresults
| eval myInput="*"
| append [ | search "my search related to query 1" | rex field=_raw "Job id : (?<job_id>[^,]+)" | eval query_type=if(myInput="*", "query1", null()) | where query_type="query1" | table job_id, query_type, myInput ]
| append [ | search "my search related to query 2" | rex field=_raw "Job id : (?<job_id>[^,]+)" | eval query_type=if(myInput!="*", "query2", null()) | where query_type="query2" | table job_id, query_type, myInput ]
I have not had the issue where I can't see data, @Strangertinz. Pravin
Afternoon, Splunkers! Timechart is really frothing my coffee today. When putting in the parameters for a timechart, it always cuts off the latest time value. For example, if I give it a time window of four hours with a span of 1h, I get a total of four data points:

12:00:00
13:00:00
14:00:00
15:00:00

I didn't ask for four data points; I asked for the data points from 12:00 to 16:00. And in this particular example, no, 16:00 isn't a time that hasn't arrived yet or only has partial data; it does this with any time range I pick, at any span setting. Now, I can work around this by programming the dashboard to add 1 second to the <latest> time for the time range. Not that huge of a deal. However, I'm left with a large void on the right-hand side of the time range. Is there any way I can fix this, either by forcing the timechart to show me the whole range or by hiding the empty range?
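For reference, one workaround sketch that pads an empty bucket at the end of the window so the chart axis reaches 16:00 without touching <latest>. The index name is a placeholder and the 3600 assumes the 1h span; adjust both to your search:

index=main earliest=-4h@h latest=@h
| timechart span=1h count
| appendpipe [ stats max(_time) as _time | eval _time=_time+3600, count=0 ]

The appendpipe adds a single zero-count row one span after the last bucket, so the right-hand side of the range is drawn instead of being cut off. Whether any visual gap remains then depends on the chart's formatting options.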
Hi @_pravin you can correct the issue by upgrading to the latest version of DB Connect, assuming you are running Splunk 9.2.x. I am still dealing with the issue of the latest version not seeing data sent from my DB Connect. Has this ever happened to you, and how did you resolve it? I can't find any error log to point to the main issue.
Thank you, but I wanted to learn where the random text "THE_TERM" comes from and how it gets into the query.
I have two queries in Splunk, query 1 and query 2, and an input. Based on the input, I need to execute either query 1 or query 2. I am trying something like the query below, but it is not working for me.

| makeresults
| eval myInput="*"
| append [ search "my search related to query 1" | rex field=_raw "Job id : (?<job_id>[^,]+)" | where myInput="*" | eval query_type="query1" | table job_id, query_type, myInput ]
| append [ search "my search related to query 2" | rex field=_raw "Job id : (?<job_id>[^,]+)" | where myInput!="*" | eval query_type="query2" | table job_id, query_type, myInput ]
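For comparison, one pattern sketch (the search strings and the myInput value are the placeholders from the question; note that both subsearches still run, only the filtering is conditional) that broadcasts the input onto every row and filters at the end:

| makeresults
| eval myInput="*"
| append [ search "my search related to query 1" | rex field=_raw "Job id : (?<job_id>[^,]+)" | eval query_type="query1" | table job_id query_type ]
| append [ search "my search related to query 2" | rex field=_raw "Job id : (?<job_id>[^,]+)" | eval query_type="query2" | table job_id query_type ]
| eventstats values(myInput) as myInput
| where (myInput="*" AND query_type="query1") OR (myInput!="*" AND query_type="query2")
| table job_id query_type myInput

The eventstats step copies the single myInput value onto every appended row, which is what the original attempt was missing: fields from the outer makeresults are not visible inside the appended subsearches, so a where clause on myInput there matches nothing.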
Splunk versions 9.0.8/9.1.3/9.2.x and above have added the capability to process key-value pairs that will be added at index time to all events flowing through the input. Now it's possible to "tag" all data coming into a particular HEC token. HEC will support all present and future inputs.conf.spec configs (_meta/TCP_ROUTING/SYSLOG_ROUTING/queue, etc.).
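As an illustration, a token stanza in inputs.conf could look like the sketch below (the token name, GUID, index, and key::value pairs are placeholders; check the inputs.conf.spec for your version for the exact settings supported):

[http://my_hec_token]
token = 11111111-2222-3333-4444-555555555555
index = main
# index-time key::value pairs stamped on every event received on this token
_meta = environment::prod team::payments
# routing overrides from inputs.conf.spec are honored as well, for example:
# _TCP_ROUTING = primary_indexer_group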
Follow the commands below on Linux, compare with the existing servers, and apply the desired timezone.

# CHANGE THE SERVER TIMEZONE via Linux
# identify the server's current timezone
timedatectl
# list the available timezones
timedatectl list-timezones
# set the desired timezone
sudo timedatectl set-timezone America/Sao_Paulo
Any suggestions, please? I need to capture two datasets and see if there is anything that is repeating.
Hi, a few days ago I installed the UF on an AIX server, but I ran into an issue: the service starts running, but then it stops after some time. The UF version is 9.0.5.
Hello, everyone. I'm looking to achieve this in a Splunk Modern Dashboard. It seems native and intuitive in a normal Splunk search, but the paint brush is not available in Dashboard, and I found no useful option in the edit tools. Could this be achieved via source code or any other way? Thanks to anyone who may help.
Hi @lbrhyne, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi @bheptinstall, Did you ever figure this out or find out what the cause was? Thanks,
Background: I've created a small function in a Spark/Databricks notebook that uses Splunk's splunk-sdk package. The original intention was to call Splunk to execute a search/query, but for the sake of simplicity while testing this issue, the function only prints properties of the service object.

import splunklib.results as results

def get_useragent_string(hostname: str, srcports: int, destip: str, earliest: float):
    return ("connected to %s" % service.info['host'])

The function works fine when I call it from a Python cell in a notebook.

get_useragent_string(hostname="XXX", srcports=49738, destip="104.16.184.241", earliest=1730235533)

The function throws this exception when I try it from a SQL statement:

%sql
SELECT get_useragent_string('xxx', 49738, '104.16.184.241', 1730235533)

Any idea what is different between Python and SQL execution? I think this is the salient part of the exception, but I don't understand why this would be different in SQL vs Python.

File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl
    if host.startswith('[') and host.endswith(']'): host = host[1:-1]
    ^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'startswith'

Full exception is below.

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 574.0 failed 4 times, most recent failure: Lost task 0.3 in stage 574.0 (TID 1914) (10.244.27.5 executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/root/.ipykernel/141000/command-3792261566929826-3142948785", line 17, in get_useragent_string File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 3199, in oneshot return self.post(search=query, ^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 941, in post return self.service.post(path, owner=owner, app=app, sharing=sharing, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 322, in wrapper except HTTPError as he: File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 76, in new_f val = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 816, in post response = self.http.post(path, all_headers, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1315, in post return self.request(url, message) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1338, in request raise File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1472, in request scheme, host, port, path = _spliturl(url) ^^^^^^^^^^^^^^ File
"/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl if host.startswith('[') and host.endswith(']'): host = host[1:-1] ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'startswith' at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:551) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:115) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:98) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:507) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$5(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$3(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$1(UnsafeRowBatchUtils.scala:68) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62) at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$2(Collector.scala:214) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:211) at org.apache.spark.scheduler.Task.doRunTask(Task.scala:199) at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:161) at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:134) at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:155) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.Task.run(Task.scala:102) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:1036) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:110) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:1039) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:926) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.$anonfun$failJobAndIndependentStages$1(DAGScheduler.scala:3998) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3996) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3910) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3897) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3897) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1758) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1741) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1741) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:4256) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4159) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4145) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:55) at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1404) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1392) at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:3153) at org.apache.spark.sql.execution.collect.Collector.$anonfun$runSparkJobs$1(Collector.scala:355) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:299) at org.apache.spark.sql.execution.collect.Collector.$anonfun$collect$1(Collector.scala:384) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:381) at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:122) at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:131) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:94) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:90) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:78) at 
org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$1(ResultCacheManager.scala:552) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.qrc.ResultCacheManager.collectResult$1(ResultCacheManager.scala:546) at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$2(ResultCacheManager.scala:561) at org.apache.spark.sql.execution.adaptive.ResultQueryStageExec.$anonfun$doMaterialize$1(QueryStageExec.scala:663) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1184) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$8(SQLExecution.scala:874) at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$7(SQLExecution.scala:874) at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$6(SQLExecution.scala:874) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$5(SQLExecution.scala:873) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$4(SQLExecution.scala:872) at com.databricks.sql.transaction.tahoe.ConcurrencyHelpers$.withOptimisticTransaction(ConcurrencyHelpers.scala:57) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$3(SQLExecution.scala:871) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:195) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$2(SQLExecution.scala:870) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:855) at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.$anonfun$run$1(SparkThreadLocalForwardingThreadPoolExecutor.scala:157) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.$anonfun$runWithCaptured$4(SparkThreadLocalForwardingThreadPoolExecutor.scala:113) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:112) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured$(SparkThreadLocalForwardingThreadPoolExecutor.scala:89) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:154) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.run(SparkThreadLocalForwardingThreadPoolExecutor.scala:157) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): File 
"/root/.ipykernel/141000/command-3792261566929826-3142948785", line 17, in get_useragent_string File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 3199, in oneshot return self.post(search=query, ^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 941, in post return self.service.post(path, owner=owner, app=app, sharing=sharing, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 322, in wrapper except HTTPError as he: File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 76, in new_f val = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 816, in post response = self.http.post(path, all_headers, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1315, in post return self.request(url, message) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1338, in request raise File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1472, in request scheme, host, port, path = _spliturl(url) ^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl if host.startswith('[') and host.endswith(']'): host = host[1:-1] ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'startswith' at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:551) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:115) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:98) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:507) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$5(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at 
org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$3(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$1(UnsafeRowBatchUtils.scala:68) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62) at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$2(Collector.scala:214) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:211) at org.apache.spark.scheduler.Task.doRunTask(Task.scala:199) at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:161) at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:134) at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:155) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.Task.run(Task.scala:102) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:1036) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:110) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:1039) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:926) ... 3 more            
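The service object in the snippet appears to be created outside the function, and the None host inside _spliturl suggests that connection state is not available where the SQL UDF actually executes. A minimal sketch, with placeholder host and credentials, of building the connection inside the function so it exists in whichever process evaluates the UDF:

import splunklib.client as client

# Placeholder connection details -- substitute your own Splunk management endpoint and credentials.
SPLUNK_HOST = "splunk.example.com"
SPLUNK_PORT = 8089
SPLUNK_USERNAME = "svc_databricks"
SPLUNK_PASSWORD = "changeme"

def get_useragent_string(hostname: str, srcports: int, destip: str, earliest: float) -> str:
    # Create the service connection inside the function, rather than relying on a
    # driver-side notebook global that may not be visible when the function runs
    # as a registered SQL UDF.
    service = client.connect(
        host=SPLUNK_HOST,
        port=SPLUNK_PORT,
        username=SPLUNK_USERNAME,
        password=SPLUNK_PASSWORD,
    )
    return "connected to %s" % service.info["host"]

Whether this is the actual cause can't be confirmed from the traceback alone, but it is one concrete difference between calling the function directly in a Python cell (where the notebook globals are in scope) and invoking it from SQL.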
@richgalloway Did you or anyone else in this post ever end up creating the full script to get recent versions from the "https://splunkbase.splunk.com/api/v1/app/<UUID>/release/" endpoint?
Hi @Strangertinz, I tried the same steps as you, going back and forth between versions, and had the same issues as mentioned. I think it's a bug in the Splunk DB Connect app. Thanks, Pravin
The term is being queried out of _raw, which is also the field "Log".
Use "blacklist" in the inputs.conf instead.
It is not always easy to decipher what your search is trying to do without some sample, representative, anonymised events and the expected results. Please can you provide some events, preferably using the code block </> button to insert them into your reply.
I found the solution. It was a pass4SymmKey issue: I had some special characters in my pass4SymmKey, due to which there was a difference in the passwords. To check, use the commands below.

splunk btool server list shcluster --debug | grep -i pass4symmkey (this will give you the password Splunk has)
splunk show-decrypted --value 'pass4symmkey' (this will decrypt the key)

Now verify the passkey; it will surely be a mismatch.
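If the decrypted values do differ across members, one way to realign them (a sketch; the stanza shown assumes you are fixing a search head cluster) is to set the same plain-text key in server.conf on every member and restart, letting Splunk re-encrypt it:

# server.conf on each SHC member (the key value is a placeholder)
[shclustering]
pass4SymmKey = SamePlainTextKeyOnEveryMember

Avoiding characters that the shell or configuration parser treats specially, as noted above, helps keep the mismatch from recurring.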