All Topics

How to generate a health rule threshold-breaches report for a certain month using the API
I am looking for SPL with which we can check who updated the whitelist in the lookup table, and also what changes were made compared with the previous version.   Thanks, Sahil
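A possible starting point, sketched under the assumption that the whitelist lookup is edited through Splunk Web or the REST API so the change is recorded in the _audit index (the "whitelist" search term below is a placeholder for your lookup's name):

index=_audit sourcetype=audittrail "whitelist"
| table _time user action info

To see what changed versus the previous version, one option is a scheduled search that snapshots the lookup, e.g. | inputlookup whitelist.csv | outputlookup whitelist_previous.csv, so the two files can be compared; these file names are hypothetical.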
I have created a Splunk dashboard in which a panel consists of a table. The table has multiple columns, one of which contains URLs. The URL is clickable. I have used the following piece of code to make it clickable:
<drilldown>
  <condition field="abc">
    <link target="_blank">$row.abc|n$</link>
  </condition>
  <condition field="*"></condition>
</drilldown>
Now, the problem is that the entire row is highlighted in blue, and when I hover the mouse over any column, it gives the impression that the field value is clickable when it is not. I want all the column values to stay black and unselected. Only the URL values shall remain highlighted.
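A possible fix, sketched on the assumption that the full-row highlighting comes from the default row drilldown mode: switching the table to cell drilldown limits the click target to the clicked cell, so only the URL column acts clickable:

<table>
  <option name="drilldown">cell</option>
  <drilldown>
    <condition field="abc">
      <link target="_blank">$row.abc|n$</link>
    </condition>
    <condition field="*"></condition>
  </drilldown>
</table>

Coloring only the URL column blue while the rest stays black generally needs custom CSS on the dashboard, which is beyond plain Simple XML options.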
I have a field in my database datamodel called 'os.user', and I have a lookup called 'userAccount'. The 'userAccount' lookup has a field called 'user' that is the same as the 'os.user' field of the database DM. I want to know if all of my 'os.user' values are present in the 'userAccount' lookup.   My requirement is to know whether my lookup is sufficient and contains all the 'os.user' values. I could use some guidance on the SPL.
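A minimal sketch, assuming the data model is named "database" with a root dataset of the same name exposing os.user (adjust the datamodel and field paths to your environment); it lists any os.user values missing from the lookup:

| tstats count from datamodel=database by database.os.user
| rename database.os.user as user
| lookup userAccount user OUTPUT user as in_lookup
| where isnull(in_lookup)
| table user count

Rows that survive the where clause are users seen in the data model but absent from userAccount; an empty result means the lookup covers everything.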
Hello all, since our update to Splunk Enterprise 9.0.2, we have noticed that the dashboard colors (Simple XML) changed completely. And the new colors are terrible! Did someone experience something similar after the update? And if yes: were you able to get the colors back to the way they were? (Screenshots: colors on 8.2.2.3 vs. on 9.0.2.) I would appreciate every hint. The new colors are something that cannot be presented to management. Thanks and best regards
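A workaround sketch, assuming the change is only in the default chart palette: pin the series colors per chart with charting.seriesColors (the hex values below are illustrative, not the exact 8.2 palette):

<chart>
  <search><query>index=_internal | timechart count by sourcetype</query></search>
  <option name="charting.seriesColors">[0x1e93c6,0xf2b827,0xd6563c,0x6a5c9e]</option>
</chart>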
I have a dashboard with a pie chart like the one below. I need it to show a label with the event count, and also the color details: green for success, blue for running, red for error, orange for wait. It has to show those details as well as the event count. I also need help with the following issue on the bar chart: in the chart above there are multiple columns with different colors for a single day. Instead, there should be a single column with the different colors for each day. Can someone please help me out with this?
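A sketch for the bar-chart half, assuming the series are named success, running, error, and wait: stacking the columns collapses them into one column per day, and fieldColors pins each series to the wanted color:

<chart>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.fieldColors">{"success": 0x53a051, "running": 0x006d9c, "error": 0xdc4e41, "wait": 0xf8be34}</option>
</chart>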
Hi, I'm creating a search query to monitor when 3 or more users create accounts in an hour:
index=* sourcetype="WinEventLog:Security" EventCode=4720
| stats count as total_accounts by host
| where total_accounts >=3
| timechart span=1h sum(total_accounts)
| eval time_range=timeRange("YYYY-MM-DD hh:mm:ss", "<start-time>", "<end-time>")
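As written, stats drops _time, so the later timechart has nothing to bucket, and timeRange() is not a built-in eval function. A sketch of one working alternative, assuming the goal is to flag any host creating 3 or more accounts within an hour:

index=* sourcetype="WinEventLog:Security" EventCode=4720
| bin _time span=1h
| stats count as total_accounts by _time, host
| where total_accounts >= 3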
Hi folks, looking for some expert opinion. My logs contain many different files, and I want to capture the start and end time for each file. The logs look like this:
timestamp 202301_filex_a_b.z started execution
timestamp 202301_filex_a_b.z finished execution
timestamp 202301_filey_e_f.z started execution
timestamp 202301_filey_e_f.z finished execution
The output would look something like:
filex | start timestamp | end timestamp | duration
filey | start timestamp | end timestamp | duration
I was able to write different searches for start and end and then join them on the filename, but I'm wondering if there is a better way to do it.
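A join-free sketch, assuming the filename always ends in .z and each file logs one started/finished pair (the rex pattern is written against the sample lines above, and the index name is a placeholder):

index=<your_index> ("started execution" OR "finished execution")
| rex "(?<file>\S+\.z)\s+(?<action>started|finished) execution"
| eval file=replace(file, "^\d+_", "") ```optional: strip the leading date prefix```
| stats min(eval(if(action="started", _time, null()))) as start_time
        max(eval(if(action="finished", _time, null()))) as end_time
        by file
| eval duration = end_time - start_time
| fieldformat start_time = strftime(start_time, "%F %T")
| fieldformat end_time = strftime(end_time, "%F %T")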
There are five different hosts on our fleet in two different timezones, with four sourcetypes on each. The problem is that the time shown in Splunk Cloud isn't always the timestamp from the logs; they differ. The hosts pass the data through an intermediate forwarder (a universal forwarder running inside) which is in UTC. There are also cases where one sourcetype from one host shows up with the correctly parsed time, but when it comes from a different source, it doesn't. I'll explain below:
Five different hosts - host_A (MST), host_B (MST), host_C (UTC), host_D (UTC), host_E (UTC)
Four different sourcetypes - src_W, src_X, src_Y, src_Z
For host_A (MST) and host_B (MST), src_W is shown at the correct time; src_X and src_Y are not. For example, if src_X and src_Y have the timestamp 05/02/2022 14:xx:xx, in Splunk it shows as 04/02/2022 7:xx:xx. Between these two, src_Z only comes from host_A, and a timestamp of 05/02/2022 14:xx:xx shows in Splunk as 04/02/2022 9:xx:xx.
For host_C (UTC) - if src_W and src_X have the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_C doesn't have Y and Z.
For host_D (UTC) - if src_Y has the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_D doesn't have the other sourcetypes.
For host_E (UTC) - if src_Y has the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_E doesn't have the other sourcetypes. For src_Z, a timestamp of 05/02/2022 14:xx:xx shows in Splunk as 04/02/2022 9:xx:xx - just like on host_A.
Sorry, this might seem very complicated, and it is in MST and not PST like I said before. My Splunk Cloud instance is also set to MST.
Below is what the log formatting looks like:
From src_W: eni=xx.yy.zz.aa client_ip=- - - [05/Feb/2023:17:46:53 -0700] ... ... ....
From src_X: DEBUG 2023-02-06 00:49:22 ... ... ...
From src_Y: INFO 2023-02-06 00:50:02 ... ... ...
From src_Z: qwertyui Sun Feb 5 04:40:39 2023:
Thank you for the help!
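A diagnostic sketch with a placeholder index name: comparing _time against _indextime per host and sourcetype makes the offset pattern explicit. A constant 7-hour gap on the MST hosts would point at a missing TZ setting in props.conf for the affected sourcetypes, since the formats without an explicit timezone offset (src_X, src_Y, src_Z) are the ones drifting, while src_W carries "-0700" in the event itself:

index=<your_index>
| eval lag_hours = round((_indextime - _time) / 3600, 1)
| stats count by host, sourcetype, lag_hours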
Hey everyone,   I'm at a loss as to what this is. I always get stuck at install step 27, and then it throws these errors at me, but I can't figure out what it is and how to fix it.   I followed these steps: https://docs.splunk.com/Documentation/SOARonprem/5.5.0/Install/InstallUnprivileged   It's being run on Red Hat Linux 7 on Google's GCP.   I've attached a photo of the errors.   Any help is appreciated.
Hi,    I am trying to use the Splunk REST API to pull the logs to do some dashboarding in our external application. There will be a Java middleware that calls these APIs, and the response will be parsed by the UI. But when I call the Splunk REST API, it returns multiple JSON records, not a list: just separate JSON records, which are troublesome to parse since they aren't a list. How do we make sure the response from the Splunk REST API is a single valid JSON document that can be parsed?    The screen shows the query and response from Postman. How do we get a single JSON response from Splunk that has these JSON results as a list that can be parsed easily by a program?
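One approach, sketched with placeholder host and credentials: submit the search as a oneshot job with output_mode=json, which returns a single JSON document whose "results" key is an array the middleware can parse directly:

curl -k -u admin:changeme https://splunk-host:8089/services/search/jobs \
    -d search="search index=_internal | head 5" \
    -d exec_mode=oneshot \
    -d output_mode=json

The /services/search/jobs/export endpoint, by contrast, streams one JSON object per result, which matches the hard-to-parse response described above.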
Dears, we have two fields in one index. We need to compare the two fields and then create a new field that shows only the difference between them. Below is one example of the results from the two fields:   current_conf field: _Name:REQ000004543448-4614240-shrepoint previous_conf field: _Name:REQ000004543448-shrepoint   We appreciate your support.
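A sketch assuming the values are hyphen-delimited tokens and Splunk 8.0+ (for mvmap); it keeps each token of current_conf that does not appear in previous_conf, which for the example above yields 4614240:

| eval cur = split(current_conf, "-"), prev = split(previous_conf, "-")
| eval diff = mvmap(cur, if(isnull(mvfind(prev, "^" . cur . "$")), cur, null()))

Note that tokens containing regex metacharacters would need escaping before the mvfind comparison.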
Dears, I now want to deploy Splunk in our organization.   Are there any videos or documentation for that?  I need to be sure that Splunk collects data from everywhere!   Thank you!
Need some help, folks. I am trying to create a dashboard where I have 1,500 values that I need to put in a dropdown. I am using Dashboard Studio's dropdown input, but unfortunately I am not able to list more than 1,000 values. I need help either listing all 1,500 values in that dropdown input, or a logic for how I can split these into two dropdowns.   Please help, folks.
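A common workaround sketch, with hypothetical lookup and token names: cascade two dropdowns so neither exceeds the 1,000-option limit. The first input lists prefixes; the second input's dynamic-options search filters on the selected $prefix$ token.

First dropdown (distinct first letters):
| inputlookup all_values.csv
| eval prefix = upper(substr(value, 1, 1))
| dedup prefix
| sort prefix

Second dropdown (only values under the chosen prefix):
| inputlookup all_values.csv
| where like(value, "$prefix$%")
| fields value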
Hi Team,  I'm new to Splunk and need a little guidance with fixing errors that occurred when I set up the /var/log directory from Ubuntu as a monitor input.
-------------------------------------------------------------------------------------------------------------------------------
Health Status of Splunkd: Real-time Reader-0
Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages (excerpt):
02-04-2023 20:02:25.936 -0800 WARN TailReader [4979 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
02-04-2023 20:02:25.910 -0800 WARN TailReader [4980 batchreader0] - Could not send data to output queue (parsingQueue), retrying...
02-04-2023 20:02:20.904 -0800 WARN TailReader [4979 tailreader0] - Enqueuing a very large file=/var/log/auth.log.1 in the batch reader, with bytes_to_read=9885261283, reading of other large files could be delayed
02-04-2023 20:02:20.875 -0800 INFO TailReader [4979 tailreader0] - Ignoring file '/var/log/wtmp' due to: binary
02-04-2023 20:02:19.846 -0800 INFO TailReader [4966 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
02-04-2023 20:02:19.846 -0800 INFO TailReader [4966 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - batchreader0 waiting to be un-paused
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - Starting batchreader0 thread
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - Registering metrics callback for: batchreader0
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - tailreader0 waiting to be un-paused
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - Starting tailreader0 thread
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - Registering metrics callback for: tailreader0
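A sketch for watching the blocked queue from the Splunk side; the WARN messages above point at a saturated parsing queue, plausibly from the batch read of the ~9.9 GB /var/log/auth.log.1:

index=_internal source=*metrics.log group=queue name=parsingqueue
| timechart max(current_size_kb) as current_size_kb max(max_size_kb) as max_size_kb

If current_size_kb sits at max_size_kb, the queue is full and the monitor input will keep retrying as shown.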
Numeral system macros for Splunk v1.1.1
Printing bytes as a human-readable size (e.g. 4KiB, 1023.4MiB, 23.4GiB, 345.67TiB)
Sometimes it is necessary to divide bytes by powers of 1024 and convert them to human-readable units. Writing that calculation in the SPL each time does not look good and makes the SPL long, so a common macro keeps it simple. For this purpose, I added 2 macros to Numeral system macros for Splunk v1.1.1:
numeral_binary_symbol(bytes) - binary symbol: KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB, RiB, QiB
numeral_binary_symbol(bytes,digits) - binary symbol with an argument for the rounding digits.
For the other macros provided, click here.
Usage 1:
| makeresults count=35 ```THIS SECTION IS JUST CREATING SAMPLE VALUES.```
| streamstats count as digit
| eval val=pow(10,digit-1), val=val+random()%val
| foreach bytes [eval <<FIELD>>=val]
| table digit val bytes
| fieldformat val=tostring(val,"commas")
```THE FOLLOWING LINE MAY BE WHAT ACHIEVES THE FORMAT YOU ARE LOOKING FOR.```
| fieldformat bytes=printf("% 9s",`numeral_binary_symbol(bytes,1)`)
Usage 2: an example of sorting sourcetypes in descending order of throughput.
index="_internal" source="*metrics.log" per_sourcetype_thruput
| stats sum(eval(kb*1024)) AS bytes by series
```THE FOLLOWING LINE MAY BE WHAT ACHIEVES THE FORMAT YOU ARE LOOKING FOR.```
| fieldformat bytes=printf("% 10s",`numeral_binary_symbol(bytes,2)`)
| sort 0 - bytes
Points:
The internal value is still in bytes, and it remains sortable. Since fieldformat retains the original value internally, the MiB and KiB displays can also be used for sorting, with the values staying comparable.
The kb information can be converted to bytes so the common macro can be used.
Why use the weird units KiB and MiB instead of KB and MB? As a side note: to the general public, "kilo" means 1000 and nothing else, but in the computer world it has long been common belief, as if it were industry knowledge, that a KB (kilobyte) is 2 to the 10th power = 1024 bytes. However, this is a definite source of confusion, so standards such as IEC 60027-2, IEEE 1541-2002, and IEC 80000-13:2008 define the KiB (kibibyte) and MiB (mebibyte) units as byte units based on 1024 to avoid that confusion. These units are not at all widespread and are unfamiliar, but since confusion over numbers is a source of misunderstanding, I dared to use them in these macros in order to avoid misunderstanding and to have a common understanding in Splunk's output.
Enjoy Splunking!
I have 40 Windows 2012 domain controllers (forwarding through heavy forwarders to cloud) that intermittently stop sending "WinEventLog:Security" events to the cloud indexers. In some cases, one of the servers will send Security events for a few hours and then stop sending altogether. I know the events exist on the server because I can see them through Event Viewer. On the other hand, I don't have the same issue with the Application or System events; they flow all the time. The issue only happens with "WinEventLog:Security" events. So far, I have tried to split the load among 4 heavy forwarders, thinking it was a forwarder congestion issue. I also configured the domain controllers to send directly to cloud, bypassing the heavy forwarders. Alas, no success.  Has anyone experienced or heard about this issue? Thank you.
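A monitoring sketch, with the index name as a placeholder, to see which domain controllers have gone quiet on Security events and for how long:

| tstats latest(_time) as last_seen where index=<your_windows_index> sourcetype="WinEventLog:Security" by host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 60
| sort - minutes_silent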
I am attempting a lab for a class. Following the lab's instructions, I have hit a loop: I have set up Asset Discovery and enabled the inputs, but the app will not return any information. Any way I try to access the app, it takes me directly to the continue-app-setup page. Did I miss something in the setup? I have restarted Splunk itself, but not the Ubuntu server. Should I wait longer for nmap to finish?
Hey all,    I'm really struggling here. I'm trying to get a universal forwarder to pull in txt logs and set the "host" field based on the filename/file path.
Example file path: C:\SCAP_SCANS\Sessions\2023-02-04_1200\SERVER-test_SCC-5.7_2023-02-04_111238_Non-Compliance_MS_Windows_10_STIG-2.7.1.txt
Inputs.conf stanza:
[monitor://C:\SCAP_SCANS\Sessions]
disabled = false
ignoreOlderThan = 90d
host_regex = [^\\\]+(?=_SCC)
SHOULD_LINEMERGE = true
MAX_EVENTS = 500000
index = main
source = SCC_SCAP_TXT
sourcetype = SCC_SCAP_TXT
whitelist = (Non-Compliance).*\.(txt)
I've tried a few different regexes, checked btool to make sure there aren't any configs overwriting settings, tried with and without transforms and props files, and verified the regex works against the path with a makeresults query. Anyone have any suggestions?
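One thing worth checking, per the inputs.conf spec: host_regex takes the host value from the first capturing group of the regex, and the lookahead-only pattern above has no capture group. SHOULD_LINEMERGE and MAX_EVENTS are also props.conf settings rather than inputs.conf ones. A possible revision (a sketch, not verified against your environment):

[monitor://C:\SCAP_SCANS\Sessions]
disabled = false
ignoreOlderThan = 90d
# first capture group becomes the host, e.g. "SERVER-test" from the sample path
host_regex = ([^\\]+)_SCC
index = main
source = SCC_SCAP_TXT
sourcetype = SCC_SCAP_TXT
whitelist = Non-Compliance.*\.txt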
Here is the original table, but I need to put some dummy data into Field_B:
Time | Filed_A | Field_B
1 | 10 | Tom
2 | 20 | Smith
3 | 30 | Will
4 | 40 | Sam
Like this:
Time | Filed_A | Field_B
1 | 10 | DUMMY1
2 | 20 | DUMMY2
3 | 30 | Tom
4 | 40 | Smith
I expect the order of Field_B to be: DUMMY1, DUMMY2, Tom, Smith, Will, Sam... Please advise me on how to write the eval command to do this.
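A sketch, appended to the base search that produces the table above; it assumes the rows are sorted by Time and exactly two dummy values are needed. autoregress copies each Field_B value from two rows earlier (into Field_B_p2), which shifts the name sequence down, and the case() fills the first two rows with the dummies:

| sort 0 Time
| autoregress Field_B p=2
| streamstats count as row
| eval Field_B = case(row=1, "DUMMY1", row=2, "DUMMY2", true(), Field_B_p2)
| fields - Field_B_p2 row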