All Topics

Hi, a few days ago I installed the UF on an AIX server, but I ran into an issue: the service starts and runs, but then it stops after some time. The UF version is 9.0.5.
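If the forwarder's internal logs are reaching your indexers, a quick way to look for the reason it stops is to search them around the time the service dies. This is only a sketch; it assumes the default _internal index and that the host value below is replaced with your AIX server's hostname.

index=_internal host=<your_aix_uf_host> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
| stats count by component, log_level
| sort - count

Checking $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder itself around the crash time, as well as the resource limits (ulimit) for the user running splunkd on AIX, are also common next steps.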
Hello, everyone. I'm looking to achieve this in a Splunk modern dashboard. It seems native and intuitive in a normal Splunk search, but the paint brush is not available in Dashboard, and I found no useful option in the edit tools. Could this be achieved via source code or in any other way? Thanks to anyone who can help.
Background: I've created a small function in a spark/Databricks notebook that uses Splunk's splunk-sdk  package.  The original intention was to call Splunk to execute a search/query, but for the sake of simplicity while testing this issue,  the function only prints properties of the service object.           import splunklib.results as results def get_useragent_string(hostname: str, srcports: int, destip: str, earliest:float): return ("connected to %s" % service.info['host'])           The function works fine when I call it from a Python cell in a notebook.           get_useragent_string(hostname="XXX",srcports=49738,destip="104.16.184.241",earliest=1730235533)           The function throws this exception when I try it from a SQL statement           %sql SELECT get_useragent_string('xxx', 49738, '104.16.184.241', 1730235533)           Any idea what is different between Python and SQL execution? I think this is the salient part of the exception, but don't understand why this would be different in SQL vs Python.           ^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl if host.startswith('[') and host.endswith(']'): host = host[1:-1] ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'startswith'           Full exception is below           org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 574.0 failed 4 times, most recent failure: Lost task 0.3 in stage 574.0 (TID 1914) (10.244.27.5 executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/root/.ipykernel/141000/command-3792261566929826-3142948785", line 17, in get_useragent_string File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 3199, in oneshot return self.post(search=query, ^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 941, in post return self.service.post(path, owner=owner, app=app, sharing=sharing, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 322, in wrapper except HTTPError as he: File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 76, in new_f val = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 816, in post response = self.http.post(path, all_headers, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1315, in post return self.request(url, message) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1338, in request raise File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1472, in request scheme, host, port, path = _spliturl(url) ^^^^^^^^^^^^^^ File 
"/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl if host.startswith('[') and host.endswith(']'): host = host[1:-1] ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'startswith' at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:551) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:115) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:98) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:507) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$5(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$3(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$1(UnsafeRowBatchUtils.scala:68) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62) at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$2(Collector.scala:214) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:211) at org.apache.spark.scheduler.Task.doRunTask(Task.scala:199) at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:161) at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:134) at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:155) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.Task.run(Task.scala:102) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:1036) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:110) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:1039) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:926) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.$anonfun$failJobAndIndependentStages$1(DAGScheduler.scala:3998) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3996) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3910) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3897) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3897) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1758) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1741) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1741) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:4256) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4159) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4145) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:55) at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1404) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1392) at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:3153) at org.apache.spark.sql.execution.collect.Collector.$anonfun$runSparkJobs$1(Collector.scala:355) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:299) at org.apache.spark.sql.execution.collect.Collector.$anonfun$collect$1(Collector.scala:384) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:381) at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:122) at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:131) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:94) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:90) at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:78) at 
org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$1(ResultCacheManager.scala:552) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.qrc.ResultCacheManager.collectResult$1(ResultCacheManager.scala:546) at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$2(ResultCacheManager.scala:561) at org.apache.spark.sql.execution.adaptive.ResultQueryStageExec.$anonfun$doMaterialize$1(QueryStageExec.scala:663) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1184) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$8(SQLExecution.scala:874) at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$7(SQLExecution.scala:874) at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$6(SQLExecution.scala:874) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$5(SQLExecution.scala:873) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$4(SQLExecution.scala:872) at com.databricks.sql.transaction.tahoe.ConcurrencyHelpers$.withOptimisticTransaction(ConcurrencyHelpers.scala:57) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$3(SQLExecution.scala:871) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:195) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$2(SQLExecution.scala:870) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:855) at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.$anonfun$run$1(SparkThreadLocalForwardingThreadPoolExecutor.scala:157) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.$anonfun$runWithCaptured$4(SparkThreadLocalForwardingThreadPoolExecutor.scala:113) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:112) at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured$(SparkThreadLocalForwardingThreadPoolExecutor.scala:89) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:154) at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.run(SparkThreadLocalForwardingThreadPoolExecutor.scala:157) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): File 
"/root/.ipykernel/141000/command-3792261566929826-3142948785", line 17, in get_useragent_string File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 3199, in oneshot return self.post(search=query, ^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/client.py", line 941, in post return self.service.post(path, owner=owner, app=app, sharing=sharing, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 322, in wrapper except HTTPError as he: File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 76, in new_f val = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 816, in post response = self.http.post(path, all_headers, **query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1315, in post return self.request(url, message) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1338, in request raise File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1472, in request scheme, host, port, path = _spliturl(url) ^^^^^^^^^^^^^^ File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-2b093fd4-a851-4495-b00b-d0a105a66366/lib/python3.11/site-packages/splunklib/binding.py", line 1160, in _spliturl if host.startswith('[') and host.endswith(']'): host = host[1:-1] ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'startswith' at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:551) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:115) at org.apache.spark.sql.execution.python.BasePythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:98) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:507) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$5(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at 
org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$3(UnsafeRowBatchUtils.scala:88) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.$anonfun$encodeUnsafeRows$1(UnsafeRowBatchUtils.scala:68) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62) at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$2(Collector.scala:214) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:211) at org.apache.spark.scheduler.Task.doRunTask(Task.scala:199) at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:161) at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:134) at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:155) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.Task.run(Task.scala:102) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:1036) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:110) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:1039) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:926) ... 3 more            
Hi all, I want to send logs (which are part of our sourcetype [kube_audit]) from my heavy forwarder to a third-party system (in my case a SIEM) in syslog format, and only those that are caught by the regex defined below. Everything else should be sent normally to my indexers. There is documentation, but it has no further description for my use case. (https://docs.splunk.com/Documentation/Splunk/9.1.3/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups , https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd ) I tried to follow the documentation and tried many things, but I end up with my third-party host receiving ALL logs of my sourcetype [kube_audit] instead of only a part of them. I checked my regex, as I suspected that would be my point of failure, but there must be some other configuration I am missing, because in a simple setup the regex works as is. My setup for props.conf, transforms.conf and outputs.conf:

props.conf:

[kube_audit]
TRANSFORMS-routing = route_to_sentinel

transforms.conf:

[route_to_sentinel]
REGEX = (?<sentinel>"verb":"create".*"impersonatedUser".*"objectRef":\{"resource":"pods".*"subresource":"exec")
DEST_KEY = _SYSLOG_ROUTING
FORMAT = sentinel_forwarders

outputs.conf:

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false
useACK = true
backoffOnFailure = 5
connectionTTL = 3500
writeTimeout = 100
maxConnectionsPerIndexer = 20

[tcpout:my_indexers]
server=<list_of_servers>

[syslog]
defaultGroup = sentinel_forwarders

[syslog:sentinel_forwarders]
server = mythirdpartyhost:514
type = udp

Am I missing something? Anything notable I missed? Any help is appreciated!

Best regards
I'm creating a form that requires a date input, but rather than typing the date (and to avoid the risk of typos and errors) I want to use the month view from the time picker. I had a working version with an HTML page, but we are now on 9.2.3, so that is no longer available... The field will only ever require a single date, never a time, never a range, never real-time, etc. It will also ensure the correct format (Day-Month-Year) is used - looking at you, America. Thanks
I'm using the `Splunk Add-on for Box` to collect Box logging data. As a premise, `box:events` contains `uploaded`, `deleted`, and `downloaded` events with `source_item_id` and `source_parent_id` fields, where `source_item_id` is the file id and `source_parent_id` is its folder id. `box:file` events contain the `file id` and `location`. `box:folder` events contain the `folder id` and `location`. My goal is to resolve the folder location for a file action event in `box:events`. I can resolve it via `box:file` with one outer join SPL like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_item_id=R.id
    [ search index=box sourcetype="box:file" earliest=-1d | fields id location ]
| table L._time L.source_item_name R.location

And I can do the same with `box:folder` like this:

search index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| join type=outer left=L right=R where L.source_parent_id=R.id
    [ search index=box sourcetype="box:folder" earliest=-1d | fields id location ]
| table L._time L.source_item_name R.location

But I don't know how to integrate the two searches above into one. Please give me some ideas. Thanks in advance.
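One possible way to sketch combining them, assuming the same sourcetypes and field names as above: use plain field-based joins with renames so both lookups can be chained in a single search (adjust the time ranges as needed).

index="box" sourcetype="box:events" event_type=DOWNLOAD earliest=-1d
| fields _time source_item_name source_item_id source_parent_id
| rename source_item_id AS id
| join type=outer id
    [ search index=box sourcetype="box:file" earliest=-1d | fields id location | rename location AS file_location ]
| rename id AS source_item_id
| rename source_parent_id AS id
| join type=outer id
    [ search index=box sourcetype="box:folder" earliest=-1d | fields id location | rename location AS folder_location ]
| rename id AS source_parent_id
| table _time source_item_name file_location folder_location

Renaming `location` inside each subsearch keeps the file and folder locations from overwriting each other. If the box:file/box:folder datasets are large, building them into lookup tables instead would avoid join's subsearch limits.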
Good day,

I am trying to figure out how I can join two searches to see whether there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing in to some of our platforms.

This gets the sign-in details for the platform; as users might have multiple email addresses, I want them all:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This checks all leavers in SNOW:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Shub does not add the email to the description, only user first names and surnames. So I would need to search the first query's 'first' and 'last' against the second query to find leavers. This is what I tried, but it does not work:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| search "*first*" "*last*"
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | table _time affect_dest active description dv_state number ]
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
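Since the incident description only contains first and last names, one possible sketch is to drive the ServiceNow search with a subsearch that turns each identity's first/last name into a wildcarded description filter. This assumes the `first` and `last` fields are populated in the ldap:query data and that the description contains the name in the form "first last":

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
    [ search index=collect_identities sourcetype=ldap:query first=* last=*
      | dedup first last
      | eval description="*" . first . " " . last . "*"
      | fields description
      | format ]
| dedup description
| table _time affect_dest active description dv_state number

The subsearch's `format` output becomes something like ( ( description="*Firstname Lastname*" ) OR ... ), so each leaver's name is matched as a wildcard against the ticket description; the result can then be correlated back to the identity/email search on the name fields if the sign-in addresses are needed in the same table.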
index=web_logs sourcetype=access_combined
| eval request_duration=round(duration/1000, 2)
| stats avg(request_duration) as avg_duration by host, uri_path
| where avg_duration > 2
| sort - avg_duration
Hi all, I have a search string...

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as total
| eval percent = (count/total)*100 . " %"
| fields - total

...whose percent field shows a percentage over the entire period searched and not just the day. How can the above be modified to give a percentage per day for each httpstatus?
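A minimal sketch of one way to do it, assuming the same fields as above: make the eventstats denominator per day by adding `by _time` (the _time values are already bucketed to the day), so each count is divided by that day's total rather than the grand total.

index="ee_apigee" vhost="rbs" uri="/eforms/v1.0/cb/*"
| rex "(?i) .*?=\"(?P<httpstatus>\d+)(?=\")"
| bucket _time span=day
| stats count by _time, httpstatus
| eventstats sum(count) as total by _time
| eval percent = (count/total)*100 . " %"
| fields - total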
Hi all,

After installing Splunk_TA_nix with no local/inputs on the heavy forwarders, the error I was seeing in this post went away, so that one was actually solved. However, the issue with missing linebreaks in the output mentioned by @PickleRick remains: "1) Breaks the whole lastlog output into separate events on the default LINE_BREAKER (which means every line is treated as separate event)". So I thought I'd see if I could get that one confirmed and/or fixed as well.

When searching for "source=lastlog" right now I get a list of events from each host like so:

> user2 10.0.0.1 Wed Oct 30 11:20
> another_user 10.0.0.1 Wed Oct 30 11:21
> discovery 10.0.0.2 Tue Oct 29 22:19
> scanner 10.0.0.3 Mon Oct 28 21:39
> admin_user 10.0.0.4 Mon Oct 21 11:19
> root 10.0.0.1 Tue Oct 1 08:57

Before placing the TA on the HFs I would see output only containing the header

> USERNAME FROM LATEST

which is completely useless. After adding the TA to the HFs this "header" line is no longer present at all, in any events from any server, while field names are correct and fully searchable with IP addresses, usernames etc.

My question at this point is probably best formulated as "am I alright now?" Based on the feedback in the previous post I was sort of assuming that the expected output/events should be the same as the screen output when running the script locally, i.e. one event with the entire output, like so:

USERNAME FROM LATEST
user2 10.0.0.1 Wed Oct 30 11:20
another_user 10.0.0.1 Wed Oct 30 11:21
discovery 10.0.0.2 Tue Oct 29 22:19
scanner 10.0.0.3 Mon Oct 28 21:39
admin_user 10.0.0.4 Mon Oct 21 11:19
root 10.0.0.1 Tue Oct 1 08:57

While I can see this as being easier on the eyes and easier to interpret when found, it could make processing individual field:value pairs more problematic in searches. So what I am wondering is: is everything "OK" now? Or am I still getting events with incorrect linebreaks? I don't know what the expected/correct output should be.

Best regards
Hello, how do I collect DNS logs from Active Directory when the domain controllers have the DNS role?
I have 3 new Splunk Enterprise instances. 2 are acting as search heads and 1 is acting as the deployer. I have successfully made them into a cluster and can even push user and search configs, but when I push apps or add-ons from the deployer I get the error below.

Error in pre-deploy check, uri=https://x.x.x.x:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

even though the password is correct for the deployer, which is also the same for both SHs. What could be the issue here?

PS: I know Splunk recommends using 3 SHs and 1 deployer. I tried that as well but have the same issue.
I was trying to install Splunk SOAR on a CentOS 9 machine, but I'm getting this error: Unable to read CentOS/RHEL version from /etc/redhat-release. I think it is due to the end of life of CentOS 7 and 8, since the provided Splunk SOAR installation is supported on those versions only. What should I do?
Hi Splunkers,

How can I create a single value field based on multiple fields? Also, let's assume that the field names can range from sample_1_country_1_name to sample_99_country_1_name and from sample_1_country_1_name to sample_1_country_99_name.

Example:

sample_1_country  sample_2_country  sample_99_country  sample_37_country
Denmark           Chile             Thailand           Croatia

Result:

sample_country_name
Denmark, Chile, Thailand, Croatia

Thanks!
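A minimal sketch of one way to do this with foreach, assuming every relevant field name starts with sample_ and ends with _country as in the example table (widen the wildcard, e.g. to sample_*_country_*_name or simply sample_*, if the real names carry the extra suffix):

| foreach sample_*_country
    [ eval sample_country_name=mvappend(sample_country_name, '<<FIELD>>') ]
| eval sample_country_name=mvjoin(sample_country_name, ", ")

mvappend ignores null values, so only the populated sample_*_country fields end up in the combined string, and mvjoin then turns the multivalue result into a single comma-separated value.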
Hello everyone,

I'm currently collecting logs from a Fortigate WAF using syslog, but I've encountered an issue where, after running smoothly for a while, the Splunk Heavy Forwarder (HF) suddenly stops receiving and forwarding the logs. The only way to resolve this is by restarting the HF, after which everything works fine again, but the problem eventually recurs.

Could anyone advise on:
- Possible causes for this intermittent log collection issue
- Any specific configurations to keep the syslog input stable
- Troubleshooting steps or recommended best practices to prevent having to restart the HF frequently

Any insights or similar experiences would be much appreciated! Thank you!
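When an HF stops forwarding, a common first check is whether its internal queues are blocking. A sketch of that check, assuming the HF's _internal logs still reach your indexers (replace the host filter with your HF's hostname):

index=_internal host=<your_hf_host> source=*metrics.log* group=queue blocked=true
| stats count by name

If queue names such as parsingqueue or the output queue show up around the time ingestion stops, the bottleneck is usually downstream of the syslog input (outputs or network) rather than the UDP input itself; if nothing is blocked, checking on the HF whether the UDP port is still being read helps narrow it down. Many deployments also place a dedicated syslog receiver writing to files in front of Splunk, so that a Splunk restart does not drop UDP syslog traffic.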
In our environment:

Observed technology: IBM Sterling File Gateway
AppDynamics agent: Java 23.12

Problem statement: AppDynamics is not able to discover BTs, and the IBM SFG vendor does not agree to share class names and method names with Cisco tools. Can someone please help with how to discover the BTs for SFG? I appreciate any support here.
I have data like this in a Splunk search:

2024-10-29 20:14:49 (715) worker.6 worker.6 txid=XXXX JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1167 Total records archived: 2167 Total related records archived: 1167
2024-10-29 20:13:17 (337) worker.0 worker.0 txid=YYYY JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1066 Total records archived: 2066 Total related records archived: 1066

How can I prepare a table as below? Basically, prepare a list of tables and the sum of their counts between the text "Total records archived per table:" and "Total records archived:":

sn_vul_vulnerable_item: 2000
sn_vul_detection: 2233

This is what I have so far:

node=* "Total records archived per table" "Total related records archived:"
| rex field=_raw "Total records archived per table ((?m)[^\r\n]+)(?<tc_table>\S+): (?<tc_archived_count>\d+) Total related records archived:"
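A sketch of one way to get there, assuming the table/count pairs always sit between "Total records archived per table:" and "Total records archived:" as in the sample events: first cut out that block, then pull every "name: count" pair out of it with max_match=0, and finally sum per table.

node=* "Total records archived per table" "Total related records archived:"
| rex field=_raw "Total records archived per table:(?<tc_block>[\s\S]*?)Total records archived:"
| rex field=tc_block max_match=0 "(?<tc_pair>\S+:\s+\d+)"
| mvexpand tc_pair
| rex field=tc_pair "(?<tc_table>\S+):\s+(?<tc_archived_count>\d+)"
| stats sum(tc_archived_count) AS total_archived BY tc_table

mvexpand turns the multivalue tc_pair field into one row per table per event, so the final stats adds 1000 + 1000 for sn_vul_vulnerable_item and 1167 + 1066 for sn_vul_detection, matching the expected output.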
Hello Splunkers,

I have an input dropdown field. When I select "*" in that input dropdown, I need to pass base search 1 to all searches in the dashboard; when I select any other value apart from "*", I need to pass base search 2 to all searches in the dashboard.

<form version="1.1">
  <label>Clone sample</label>
  <search>
    <query>
      | makeresults
      | eval curTime=strftime(now(), "GMT%z")
      | eval curTime=substr(curTime,1,6)
      | rename curTime as current_time
    </query>
    <progress>
      <set token="time_token_now">$result.current_time$</set>
    </progress>
  </search>
  <search id="base_1">
    <query>
      index=2343306 sourcetype=logs*
      | head 10000
      | fields _time index Eventts IT _raw
      | fillnull value="N/A"
    </query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <search id="base_2">
    <query>
      index=2343306 sourcetype=logs*
      | where isnotnull(CODE)
      | head 10000
      | fields _time index Eventts IT CODE _raw
      | fillnull value="N/A"
    </query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="radio" token="field1">
      <label>field1</label>
      <choice value="All">All</choice>
      <choice value="M1">M1</choice>
      <choice value="A2">A2</choice>
      <change>
        <eval token="base_token">case("All"="field1", "base_1", "All"!="field1", "base_2")</eval>
      </change>
    </input>
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>table</title>
        <search base="$base_token$">
          <query>| table *</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

I have tried passing the token in the input dropdown, but it doesn't work. Can you please help me fix this issue? Thanks!
Hi everyone,

I am using rex to extract content that contains the following word: full

| rex field=msg_old "(?<msg_keyword>full).*"

However, what I actually need is to extract content with the word full on its own, not words that merely contain full somewhere inside them, just the word full itself. Can you please advise how the expression needs to be different? Thanks
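A minimal sketch, assuming msg_old holds plain text and "on its own" means full delimited by word boundaries (so it won't match fully, useful and the like): add \b anchors around the word.

| rex field=msg_old "(?<msg_keyword>\bfull\b)"

If full must also be a standalone token surrounded by whitespace or string boundaries rather than just word boundaries, a stricter variant is "(?:^|\s)(?<msg_keyword>full)(?:\s|$)".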
Hi, I just wondered whether Oracle Cloud has tagging to onboard data like AWS does for Splunk, for example: splunk add monitor /var/log/secure. Thanks.