All Topics

My org is pulling in vuln data using the Qualys TA and I am trying to put together a handful of searches and dashboards to see metrics quickly. I'm currently using the following over the last 30 days:

index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling(((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400))
```| bucket span=1d _time```
| timechart span=1d avg(MTTR) as AVG_MTTR_PER_DAY
| streamstats window=7 avg(AVG_MTTR_PER_DAY) as 7_DAY_AVG

This gets me close, but I believe it is giving an average of averages, not the overall average. Using the month of May, I wouldn't have a calculated value until May 8th, which would use the data from May 1-7; May 9th would use May 2-8, etc. Any help on how to calculate the overall average?
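A sketch of one way around the average-of-averages problem, assuming the same fields as above: carry the daily sums and counts through the timechart and divide only at the end, so each 7-day figure is a true per-event average.

index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400)
``` keep sums and counts separate so the final division weights every event equally ```
| timechart span=1d sum(MTTR) as daily_sum, count as daily_count
| streamstats window=7 sum(daily_sum) as win_sum, sum(daily_count) as win_count
| eval 7_DAY_AVG = round(win_sum / win_count, 2)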
Hi, I just installed an indexer cluster and I already know that I should place apps in the $SPLUNK_HOME/etc/master-apps/ directory on my manager node to distribute them across all indexers, but I have 2 questions. 1. If an app that I deployed on the indexers uses Python scripts to fetch data, will this data be duplicated? (See the sketch below.) 2. Do I need to prepare an app before deploying it to my indexers (remove unnecessary dashboards, eventtypes, etc.)? Or can I leave it unchanged?
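On question 1: every peer would run the app's scripted or modular inputs independently, so the fetched data would indeed be duplicated once per indexer. A common precaution is to disable data-collection inputs in the copy pushed from the manager; a minimal sketch (the app and script names here are made up):

# $SPLUNK_HOME/etc/master-apps/<your_app>/local/inputs.conf on the manager node
[script://./bin/fetch_data.py]
disabled = 1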
Hi All, How do I resolve the issue when the queues are full on an indexer? Kindly let me know.
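A common first step is to see which queues are filling and when, using the indexer's own metrics; a sketch, assuming the standard _internal index is searchable:

index=_internal source=*metrics.log group=queue
| timechart span=5m avg(current_size_kb) by name

The queue that stays pegged near its maximum usually points at the bottleneck (for example, indexqueue suggests disk I/O pressure, while typingqueue/aggqueue suggest heavy parsing).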
Hi All, Is there an SPL query we can run from the search heads to identify line-breaking and event-breaking issues?
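A possible starting point: the components that do the breaking log warnings to _internal, and those are searchable from a search head even though the parsing itself happens on indexers and heavy forwarders. A sketch:

index=_internal sourcetype=splunkd log_level=WARN (component=LineBreakingProcessor OR component=AggregatorMiningProcessor OR component=DateParserVerbose)
| stats count by component, host, data_sourcetype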
Hi, Could someone please suggest an alternative product for Splunk Business Flow, as this particular product was deprecated post 2020? If there is no single product that provides the same functionality, is there a different way to monitor business flows? Thanks, Pradeep.
Trying to use rex to get the contents of the field letterIdAndDeliveryIndicatorMap. For example, given the logged string letterIdAndDeliveryIndicatorMap=[abc=P, efg=P, HijKlmno=E] I want to extract the contents between the [ ], which is abc=P, efg=P, HijKlmno=E and then run stats on them. I was trying something like rex field=_raw "letterIdAndDeliveryIndicatorMap=\[(?<letterIdAry>[^\] ]+)" but it's not working as expected. Thanks in advance!
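The space inside the character class [^\] ]+ makes the capture stop at the first space after abc=P, which would explain the short match. A sketch of one possible fix, splitting the capture into pairs afterwards (the letterId/indicator field names are made up):

| rex field=_raw "letterIdAndDeliveryIndicatorMap=\[(?<letterIdAry>[^\]]+)\]"
| eval letterIdAry=split(letterIdAry, ", ")
| mvexpand letterIdAry
| rex field=letterIdAry "(?<letterId>[^=]+)=(?<indicator>.+)"
| stats count by indicator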
Hello Team, I have configured a Splunk forwarder and am getting the error below:

WARN TcpOutputProc [8204 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=WALVAU-VIDI-1 inside output group default-autolb-group from host_src=WALVAU-MCP-APP- has been blocked for blocked_seconds=400. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Task: I want to send data from the Splunk forwarder to the Splunk Enterprise server (indexer).
1. I opened outbound port 9997 on the UF.
2. I opened inbound port 9997 on the indexer.

outputs.conf on the UF:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = WALVAU-VIDI-1:9997

[tcpout-server://WALVAU-VIDI-1:9997]

inputs.conf on the UF:

[monitor://D:\BEXT\Walmart_VAU_ACP\Log\BPI*.log]
disabled = false
index = walmart_vau_acp
sourcetype = Walmart_VAU_ACP

Please help me fix the issue, so that the forwarder sends data to the indexer.
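This warning generally means the receiving side has stopped accepting data rather than the UF being misconfigured; since the outputs/inputs stanzas look plausible, one hedged check is whether queues on the indexer itself are blocked:

index=_internal host=WALVAU-VIDI-1 source=*metrics.log group=queue blocked=true
| stats count by name

It is also worth confirming the indexer is actually listening on port 9997 (for example with netstat/ss) and that no firewall sits between the two hosts.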
Hi, Let's say I'm ingesting different types of log files from different sources (some are txt, csv, json, xml...) into the same index. How can I add additional data to each datasource/log? I would like to add some extra fields in JSON format, for example: customer name, system name...
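One possible approach is an index-time transform per sourcetype using INGEST_EVAL; a sketch with made-up sourcetype and field names (note that fields added this way become indexed fields, not search-time extractions):

# props.conf
[my_txt_sourcetype]
TRANSFORMS-add_meta = add_customer_meta

# transforms.conf
[add_customer_meta]
INGEST_EVAL = customer_name="acme", system_name="billing"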
I have downloaded the Splunk 9.2 RPM and installed it on RHEL 9.2. When I run splunk enable boot-start it throws the error below.

[root@splunk~]# splunk enable boot-start
execve: No such file or directory while running command /sbin/chkconfig
[root@splunk~]#

Can someone help me with this?
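The default boot-start path still shells out to /sbin/chkconfig (SysV init), which RHEL 9 no longer ships; asking for systemd management avoids it entirely. A sketch, assuming Splunk lives in /opt/splunk and a splunk user exists:

/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk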
I would like to extract the results of each test within the logs array by distinct count of serial number. That is, for each serial number I want to extract the test_name and result and plot the pass/fail rates of each individual test. The test names are the same in each event, with only a different serial_number. For the raw JSON below there would be 12 tests to extract the results from. How would I go about this in search?

{"serial_number": "VE7515966", "type": "Test", "result": "Pass", "logs": [{"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"}, {"test_name": "ACR Data Read", "result": "Pass", "received": "ACR model extracted as Test"}, {"test_name": "Thermocouple in Grill", "tc_1": "295", "tc_2": "578", "req_diff": 50, "result": "Pass"}, {"test_name": "Glow Plug in Grill", "m_ugc": "1849", "tol_lower": 0, "tol_upper": 10000, "cond_ugc": "0", "tol_cond": 0, "result": "Pass"}, {"test_name": "Glow Plug Power Toggle", "result": "Pass", "received": "Glow plug power toggled"}, {"test_name": "Glow Plug in Grill", "m_ugc": "2751", "tol_lower": 0, "tol_upper": 10000, "cond_ugc": "0", "tol_cond": 0, "result": "Pass"}, {"test_name": "Motor", "R_rpm_ugc": 3775.0, "R_rpm": 3950, "R_v_ugc": 984.3333333333334, "R_v": 800, "R_rpm_t": 550, "R_v_t": 500, "R_name": "AUGER 640 R", "F_rpm_ugc": 3816.6666666666665, "F_rpm": 3950, "F_v_ugc": 985.3333333333334, "F_v": 800, "F_rpm_t": 550, "F_v_t": 500, "F_name": "AUGER 640 F"}, {"test_name": "Fan", "ugc_rpm": 2117.0, "rpm": 2700, "rpm_t": 800, "ugc_v": 554.3333333333334, "v": 630, "v_t": 200, "result": "Pass"}, {"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P', 'F']"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "Confirm Grill Size", "result": "Pass", "received": "Grill confirmed as Test"}]}
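A sketch of one way to do it, assuming _raw holds the JSON as shown: expand the logs array into one row per test, then aggregate (the test_result name is introduced to avoid clobbering the outer result field; index/sourcetype are placeholders):

index=<your_index> sourcetype=<your_sourcetype>
| spath path=logs{} output=test
| mvexpand test
| spath input=test path=test_name output=test_name
| spath input=test path=result output=test_result
| stats count(eval(test_result="Pass")) as pass, count(eval(test_result="Fail")) as fail, dc(serial_number) as units by test_name
| eval pass_pct = round(100 * pass / (pass + fail), 1)

(Note the Motor entry in the sample has no result key, so it would show zero in both columns.)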
In the Splunk Synthetics docs, it's mentioned that you need to use the following syntax to reference a custom variable defined in earlier steps: {{custom.$variable}}. However, when I tried it, that syntax only seems to work in "Fill in field" type steps. If I have a downstream JavaScript step and want to reference, in that JavaScript, a custom variable I defined in earlier steps, how do I reference it? The {{custom.$variable}} syntax does NOT work in my JavaScript.
Hello, I am getting the below error when I attempt to create a secret storage in /opt/splunk/bin:

[root@ba-log bin]# splunk rdrand-gen -v -n 1M -m reseed_delay -o /opt/splunk/rdrand.bin
Passphrase that unlocks secret storage:
/usr/bin/env: ‘python’: No such file or directory
/opt/splunk/etc/system/bin/gnome_keyring.py: process returned non-zero status: status=done, exit=127, stime_sec=0.005357, max_rss_kb=13716, vm_minor=216, sched_vol=1, sched_invol=7
Error accessing secret storage.

Permissions on the file:
-rwxr-xr-x. 1 root root 42392 Jan 6 2023 /usr/bin/env

I verified with splunk envvars and the path to /usr/bin is in there. I run the splunk command as root, so why isn't Splunk able to read the /usr/bin/env file?
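The message is not about reading /usr/bin/env itself: env runs fine but cannot find a command named "python" on the PATH, because RHEL 8/9 ship only python3 by default. A hedged workaround sketch:

# RHEL's optional package that restores the unversioned "python" command:
dnf install python-unversioned-command
# or, where an alternatives entry for python is registered:
alternatives --set python /usr/bin/python3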
How do I add a dummy row to a table in a Splunk dashboard? We receive 2 files every day in four windows: 6-7:30 AM, 11 AM-12:30 PM, 6-7:30 PM, 9-10:05 PM. I need output like below: if only one file arrives in a window, the table has to show the other file as missing. Using the | makeresults command we can create a row, but that only helps while calculating the timings.

Input:

File  Date
TI7L  03-06-2024 06:52
TI7L  03-06-2024 06:55
TI8L  03-06-2024 11:51
TI8L  03-06-2024 11:50
TI9L  03-06-2024 19:06
TI9L  03-06-2024 19:10
TI5L  03-06-2024 22:16
TI5L  03-06-2024 22:20

Output:

File          Date
TI7L          03-06-2024 06:52
Missing file  Missing file
TI8L          03-06-2024 11:50
TI9L          03-06-2024 19:06
Missing file
TI5L          03-06-2024 22:16
Missing file
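One possible pattern, sketched with made-up index/sourcetype names and approximate window boundaries: start from the list of expected windows so that absent windows survive as rows, left-join the actual arrivals onto it, and fill the gaps.

| makeresults format=csv data="window
06:00-07:30
11:00-12:30
18:00-19:30
21:00-22:05"
| join type=left window
    [ search index=<your_index> sourcetype=<your_sourcetype>
      | eval window=case(date_hour>=6 AND date_hour<8, "06:00-07:30",
                         date_hour>=11 AND date_hour<13, "11:00-12:30",
                         date_hour>=18 AND date_hour<20, "18:00-19:30",
                         date_hour>=21 AND date_hour<23, "21:00-22:05")
      | stats values(File) as File, max(Date) as Date by window ]
| fillnull value="Missing file" File Date
| table window, File, Date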
Hi, I tried to build a Splunk environment with 1 SH and an indexer cluster of 2 peers + a manager node. When I go to Monitoring Console -> Settings -> General Setup it shows me only my SH and peers, without the manager node. But when I go to Distributed environment I can see my indexer manager configured. Did I do something wrong, or should it not be displayed in the General Setup menu?
I'm trying to import data into the Observability platform, but I can't follow your documentation. This page, https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens, says Settings - Access Tokens exists, but it doesn't. (My home page is https://prd-p-a9b9x.splunkcloud.com/en-US/manager/splunk_app_for_splunk_o11y_cloud/authentication/users). Settings - Tokens exists, but it doesn't create tokens with scopes. I don't know if that's a documentation error or an application error. I then tried running the code at https://docs.splunk.com/observability/en/gdi/other-ingestion-methods/rest-APIs-for-datapoints.html#start-sending-data-using-the-api, which says I need a realm. And a realm can be found at "your profile page in the user interface". But it's not in User Settings and it's not in Settings - User Interface. Your documentation doesn't seem to match your application. Am I on the wrong page, or are your docs years out of date? Please help.
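For reference, once a realm and an ingest-scoped access token are in hand, the datapoint API call looks roughly like this (a sketch; the realm and token are placeholders and test.metric is a made-up metric name):

curl -X POST "https://ingest.<REALM>.signalfx.com/v2/datapoint" \
  -H "Content-Type: application/json" \
  -H "X-SF-Token: <ACCESS_TOKEN>" \
  -d '{"gauge": [{"metric": "test.metric", "value": 42}]}'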
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status | eventstats sum(count) as total | eval percent=100*count/total | eval percent=ro... See more...
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status | eventstats sum(count) as total | eval percent=100*count/total | eval percent=round(percent,2) | eval SLO =if( status="200","99,9%","0,1%") | where NOT (date_wday=="saturday" AND date_hour >= 8 AND date_hour < 11) | fields - total count   I have the above Query and the above result , how can i combine 502 and 200 results to show our availability excluding maintenance time of 8pm to 10pm every Saturday, how can i make it look like the drawing I produced there
Hello, I receive an event of the following format: { log: { 'trace_id': 'abc', 'request_time': '2024-06-04 10:49:56.470140', 'log_type': 'DEBUG', 'message': 'hello'} } Is it possible to extract the inner JSON from all the events I receive? * each key in the inner JSON will be a column value but the me
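A sketch of one possibility, noting that spath won't parse the inner object directly because it uses single quotes, so the quotes are normalized first (the inner field name is made up):

| rex field=_raw "log:\s*(?<inner>\{[^}]+\})"
| eval inner=replace(inner, "'", "\"")
| spath input=inner
| table trace_id, request_time, log_type, message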
I have a small query that splits events depending on a multivalue field, and each nth date from the multivalue needs to become the _time of the nth "collected" row.

index=test source=test
| eval fooDates=coalesce(fooDates, foo2), fooTrip=mvsort(mvdedup(split(fooDates, ", "))), fooCount=mvcount(fooTrip), fooValue=fooValue/fooCount
| mvexpand fooTrip
| fields - _raw
| eval _time=strptime(fooTrip, "%F")
| table _time VARIOUS FIELDS
| collect index=test source="fooTest" addtime=true

The output table view is exactly what I'm expecting, but when I search these fields in the new source, they have today's time (or, with addtime=false, the earliest time from the time picker). Also, using testmode=true, I still see the results as they are supposed to be. What's wrong? Thanks
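One commonly suggested workaround, sketched under the assumption that the collected events have no _raw of their own after | fields - _raw: build _raw yourself with the desired timestamp at the front, so index-time timestamp extraction on the summary event picks it up (the field list here is abbreviated).

...
| eval _time=strptime(fooTrip, "%F")
| eval _raw = strftime(_time, "%Y-%m-%d %H:%M:%S") . " fooTrip=" . fooTrip . " fooValue=" . fooValue
| collect index=test source="fooTest" addtime=false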
Hello, Here I have a small picture of how the environment is structured: Red arrow -> Source Splunk TCP (Cribl Stream)

I'm trying to forward the journald data from the Splunk Universal Forwarder to the Cribl Worker (black to blue box). I have configured the forwarding of the journald data using the instructions from Splunk. (Get data with the Journald input - Splunk Documentation)

I can forward the journald data and it arrives at the Cribl Worker. Problem: the Cribl Worker cannot distinguish the individual events in the journald data, i.e. it does not know when a single event is over, and thus combines several individual events into one large one. The Cribl Worker always merges about 5-8 journald events. (I have marked the individual events here. However, they arrive as one such block, sometimes more together, sometimes less.)

Event 1: Invalid user test from 111.222.333.444 port 1111 pam_unix(sshd:auth): check pass; user unknown pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444 Failed password for invalid user test from 111.222.333.444 port 1111 ssh2 error: Received disconnect from 111.222.333.444 port 1111:13: Unable to authenticate [preauth] Disconnected from invalid user test 111.222.333.444 port 1111 [preauth]

What I tested: If I have the journald data from the universal forwarder forwarded not via a Cribl Worker but via a heavy forwarder (the blue box in the picture above is then no longer a Cribl Worker but a Splunk Heavy Forwarder), then the events are individual and easy to read. Like this:

Event 1: Invalid user test from 111.222.333.444 port 1111
Event 2: pam_unix(sshd:auth): check pass; user unknown
Event 3: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444
Event 4: Failed password for invalid user test from 111.222.333.444 port 1111 ssh2
Event 5: error: Received disconnect from 111.222.333.444 port 1111:13: Unable to authenticate [preauth]
Event 6: Disconnected from invalid user test 111.222.333.444 port 1111 [preauth]

--------------------------------

I'm looking for a solution where I can send the journald data as shown in the figure above, but with the journald data arriving broken up as in the second case. Thanks in advance for your help.
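A likely explanation for the difference: a universal forwarder ships its stream unparsed, while a heavy forwarder parses it into single events before sending, which is why the HF path looks correct. One hedged sketch is enabling the UF-side event breaker for the journald data in props.conf on the forwarder, so the UF emits event-sized chunks that Cribl can split on (the stanza name assumes the input's sourcetype is journald):

# props.conf on the universal forwarder
[journald]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)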
Anyone know of any examples on SplunkBase that have JavaScript-written commands using the Python SDK? I've written about a dozen custom commands using Python and am familiar with that process. The dev docs suggest the Splunk SDK for Python should be used for JS commands, but I'm not understanding how that's possible without importing libraries like Flask. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/nonpythonscscs