All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone, I hope you are doing well. I have a request for your support. I have multiple reports that I need to save locally in a folder on the server where Splunk is hosted. I would greatly appreciate your assistance in validating the process. Thank you very much.
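A minimal sketch of one approach, assuming the reports run as scheduled searches (the report name is a placeholder): appending outputcsv writes the results as a CSV on the search head, under $SPLUNK_HOME/var/run/splunk/csv.

... your report search ...
| outputcsv my_daily_report.csv

A cron job or a scripted alert action could then copy $SPLUNK_HOME/var/run/splunk/csv/my_daily_report.csv into the target folder.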
Hi there, I created a Splunk dashboard (Classic) that I want to download/export as a PDF. However, I am unable to do so because trellis layouts are not supported in PDF export. Also, when I try to print/export, the look of the dashboard widgets/panels gets broken. I need help finding the best way to download the dashboard view exactly as it appears in the Classic dark theme, as a snapshot or an image, so that all graphs keep the same look and feel. Is it also possible to schedule an email that sends this downloaded dashboard image?
Hi Splunk Experts, I want to search for a word and then print the current matching line and the immediate next line. Kindly assist. Thanks in advance! Note: my events are single-line events.
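A possible sketch with streamstats, assuming the default newest-first result order and a hypothetical index and search term: because results stream newest first, the previous row in the stream is the chronologically later event, i.e. the next line in the source file.

index=my_index
| streamstats current=f window=1 last(_raw) as next_line
| search _raw="*ERROR*"
| table _raw next_line

If the results have been re-sorted oldest first, flip them back with | reverse before the streamstats.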
I'd love to be able to dynamically adjust the timespan in a sparkline, as in: ...| eval timespan=tostring(round((now()-strptime("2023-07-26T09:45:06.00","%Y-%m-%dT%H:%M:%S.%N"))/60))+"m" | chart sparkline(count,timespan) as Sparkline, count by src_ip However, sparklines do not accept timespans in string format, and the example above results in the following error message: Error in 'chart' command: Invalid timespan specified for sparkline. Any suggestions? I see this question was asked back in 2019, but I couldn't find an answer.
In my previous post I was advised to deploy the Windows TA via the Deployment Server, which I did, and the app is installed on the servers I want. However, although the app is deployed, no Windows event information is being forwarded to the server. Both client and server are able to communicate with one another, and the default Splunk port, 9997/tcp, is open. I have edited the inputs.conf file in both the app and the SplunkForwarder local folder, and set the various logs I want to disabled = 0, but still no data comes through to my indexes.
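For comparison, a minimal sketch of what the TA's local inputs.conf stanza usually looks like (the Security channel is just an example):

[WinEventLog://Security]
disabled = 0

It is also worth confirming that outputs.conf on the forwarder points at the indexer, for example:

[tcpout:primary_indexers]
server = your_indexer:9997

and that splunk list forward-server on the client shows the connection as active rather than configured-but-inactive.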
Hi everyone, could you please help me break the events below?

Expected events:
Subject : ABCD
FriendlyName : ABCD
Issuer : ABCD
Thumbprint : 3CBB2CACD16
NotAfter : 2025
Expires in (Days) : 0
ForSplunk : Break

Events as actually received:
NotAfter : 2025
Expires in (Days) : 0
ForSplunk : Break
Subject : ABCD
FriendlyName :ABCD
Issuer : ABCD
Thumbprint :3CBB2CACD16
Subject : ABCD
FriendlyName :ABCD
Issuer : ABCD
Thumbprint : 3CBB2CACD16
NotAfter : 2025
Expires in (Days) : 68
ForSplunk : Break

I want my events to break after "ForSplunk : Break", but this creates issues for some of the events and not for all. I don't know why it works in some cases and not in others.

This is in my props.conf:
[MY-SOURCETYPE]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = custom
pulldown_type = 1
TIME_FORMAT = %Y-%m-%d_%H:%M:%S_%p
SHOULD_LINEMERGE = true
MUST_BREAK_AFTER = Break
disabled = false
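A sketch of a props.conf that breaks directly on the trailer line instead of merging lines back together; the regex is an assumption based on the sample above. Note that MUST_BREAK_AFTER only applies while SHOULD_LINEMERGE = true, and that combination is a common source of inconsistent breaking:

[MY-SOURCETYPE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ForSplunk\s*:\s*Break([\r\n]+)
TIME_FORMAT = %Y-%m-%d_%H:%M:%S_%p

With LINE_BREAKER, everything up to and including "ForSplunk : Break" stays with the current event, and the captured newlines become the break point.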
Hello, I can't find the minimum supported version of the Splunk Universal Forwarder for the indexer discovery capability. Can you help me? Thanks in advance.
I have some questions regarding data trim. In which version was data trim added? What parameter controls the trim, i.e. how much storage must be used before the data gets trimmed? Can we stop data trim, and how can we know that data is about to be trimmed?
My base search: PAGE_ID=* | where PAGE_ID=DGEFH OR PAGE_ID=RGHJH NOT NUM_OF_MONTHS_RUN>=6 AND NOT NUM_OF_INDIVIDUALS_ON_CASE>=4 | eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID | eval timeinsecs= round((TRAN_TIME_MS/1000),2) | stats count(eval(timeinsecs <=8)) AS countofpases count(timeinsecs) as totalcount by PAGE_CATEGORY | eval sla= (countofpases/totalcount)*100 | table sla I want to include all the PAGE_IDs, while still applying the extra criteria only to PAGE_ID=DGEFH and PAGE_ID=RGHJH.
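A possible rewrite of the filter, sketched below: in the where command, unquoted values such as DGEFH are treated as field names, so the string literals need quotes, and parentheses keep the extra criteria scoped to the two special pages while every other PAGE_ID passes through (this assumes the two numeric fields are always present):

PAGE_ID=*
| where NOT (PAGE_ID="DGEFH" OR PAGE_ID="RGHJH") OR (NOT NUM_OF_MONTHS_RUN>=6 AND NOT NUM_OF_INDIVIDUALS_ON_CASE>=4)
| eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID
...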
Hello, everyone! I have a search which ends this way:
... | table id, name | outputlookup my_lookup.csv
so my search gets results such as:
id name
1 John
2 Mark
3 James
Now I want to record only NEW ids from the search into the lookup, ones which weren't there already. Is it possible without reworking the search?
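One common pattern that avoids reworking the search, assuming id uniquely identifies a row: read the existing lookup back in and dedup before writing, so only ids not already present are added.

... | table id, name
| inputlookup append=true my_lookup.csv
| dedup id
| outputlookup my_lookup.csv

dedup keeps the first occurrence, and the fresh search results come first in the stream, so on a conflicting id the new row wins; reorder before the dedup if the old row should be kept instead.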
Hi, I'm trying to monitor changing log files within directories that change regularly. These log files are 7 layers deep on a NetApp share. I'm setting up the monitor stanza in inputs.conf on a Linux box. I have tried everything, but only certain directories at the 3rd layer are being monitored, and not others at that same layer, including the one I need. I've added recursive = true and tried all variations of syntax with no luck. All permissions are the same as for the directories that can be monitored. What am I doing wrong? Thanks in advance.
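For comparison, a sketch of a recursive monitor using the ... wildcard, which recurses through any number of directory levels (a single * matches only one level); the mount point and extension are placeholders:

[monitor:///mnt/netapp/appdata/.../*.log]
disabled = 0

If some 3rd-level directories are still skipped, splunkd.log and the output of splunk list inputstatus usually show whether a path was excluded, unreadable, or hit a file-count limit.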
I know the data is there, and that this question is answerable through the Chargeback app, but has anyone written an SPL query against their environment to predict, based on current ingest rates and retention policies, what index sizes will be in DDAS storage? I am trying to develop a good understanding of where I should be topping out after events age out and move to DDAA storage.
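Not the Chargeback app itself, but a rough sketch of the usual starting point: sum ingest per index from the license usage log (run over the last 24 hours for a daily figure), then multiply by retention and an assumed on-disk compression ratio. The 90-day retention and the 0.5 ratio below are both placeholders to replace with each index's frozenTimePeriodInSecs and your observed compression:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval daily_GB = round(bytes/1024/1024/1024, 2)
| eval projected_GB = round(daily_GB * 90 * 0.5, 2)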
Hello, which capability does a user need to be able to upload a file with the "Add Data" option?
Hi All, I am running a dashboard which returns the total count (stats count) of a field whose value is Severity=Ok or Severity=Critical. The requirement: if at least one field value is Severity=Critical, the color of the panel should turn red; otherwise, when everything is Severity=Ok, it should be green. Can someone please suggest an approach?
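One way this is often handled in Simple XML, sketched under the assumption that the panel is a single value visualization: reduce Severity to a number, then map the ranges to colors (0 means no criticals).

... | eval is_critical=if(Severity="Critical",1,0) | stats max(is_critical) as critical

<option name="colorMode">block</option>
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0x53a051","0xdc4e41"]</option>
<option name="useColors">1</option>

A value of 0 falls into the first range (green, 0x53a051) and anything above it into the second (red, 0xdc4e41).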
Hello, I have sources that contain white spaces and I want to count them. What is the regex to find all the sources with spaces? Thanks
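A minimal sketch using the regex command, with the index scope as an assumption to narrow down:

| tstats count where index=* by source
| regex source="\s"

This returns one row per source whose name contains whitespace, with its event count.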
As mentioned in https://advisory.splunk.com/advisories/SVD-2023-0606 under "Mitigations and Workarounds", users can protect themselves from log injections via ANSI escape characters in general by disabling the ability to process ANSI escape codes. That statement is very generic, and we cannot find any articles on the internet about how to do this. Take, for example, PuTTY, the SSH client most users use: does the statement mean we should disable processing of ANSI escape codes in that terminal app? If so, where can we find documentation on disabling ANSI escape codes? This is just one example. Since this is a mitigation/workaround, we also need to carefully weigh its pros and cons. Please correct me if my understanding is wrong. Any help or pointers toward understanding the above point would be of great help.
I have a problem with the timestamp when I parse the data. I want the dates to start with 28/04/2023 and end with 03/05/2023, but they start with 30/04, then 29/04, and end with 28/04. How can I make the data start with 28/04 instead of 30/04?
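If the goal is oldest-first output, a small sketch: appending a sort on _time flips the default newest-first order (the 0 removes the 10,000-row limit sort otherwise applies):

... | sort 0 + _time

Alternatively, | reverse at the end of the search simply inverts the current order.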
Receiving the error below; can someone help with a solution? Streamed search execute failed because: Error in 'lookup' command: Script execution failed for external search command '/splunk/var/run/searchpeers/xxxx/apps/utbox/bin/ut_shannon.py'
Hello, we plan to try Kafka as a data collector, and we'd like to know whether we should keep our HF to receive the HEC inputs for the Kafka data, or send them directly to the indexers, at about 200-300 GB per day. It looks like an HF is better for filtering before indexing. Thanks.
Hi, I would like to know if it is possible to perform a search in Splunk to find out whether "rex" is used in any of my dashboard searches. Kind regards, Marta
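A sketch using the REST endpoint that stores dashboard definitions, assuming permission to run the rest command ("-" in the path means all users and all apps):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*rex *"
| table title eai:acl.app eai:acl.owner

eai:data holds the raw dashboard XML, so this also matches "rex" appearing outside searches (e.g., in labels); tighten the search term, for example to "*| rex *", if that is too broad.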