Hello team! After unsuccessful research on the Internet and in the Splunk docs, I am turning to you with my question:

- Let's say I have 50 alerts in a single app, all stored in the file $SPLUNK_HOME/etc/apps/<appname>/default/savedsearches.conf.
- For version control / code management, I want to split this single savedsearches.conf into multiple savedsearches.conf files so that developers can work with a directory layout like this:

  default/
    alerts/
      category_1_alerts/
        savedsearches.conf
      category_2_alerts/
        savedsearches.conf
      ...

- I tried this without success on my Splunk instance. I don't know if it is possible, and if it is, I don't know whether some statement has to be made in the code (e.g. #include <filename>).

Have a nice day.

PS: In my version control / code management tool, I can always resort to concatenating all my files together when packaging the Splunk code if I don't manage to find a better answer.
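For what it's worth, Splunk reads only the literal savedsearches.conf files in an app's default/ and local/ directories and has no #include mechanism, so the concatenation workaround mentioned in the PS is a common build-time approach. A minimal sketch in Python, assuming the hypothetical per-category layout shown above (function and path names are illustrative, not a Splunk feature):

```python
from pathlib import Path

def build_savedsearches(alerts_dir: Path, out_file: Path) -> None:
    """Concatenate every per-category savedsearches.conf into one file.

    alerts_dir is the repo-side 'default/alerts' folder; out_file is the
    single default/savedsearches.conf that gets shipped to Splunk.
    """
    parts = []
    for conf in sorted(alerts_dir.glob("*/savedsearches.conf")):
        # Keep a comment trail so a stanza can be traced back to its category
        parts.append(f"# --- from {conf.parent.name} ---")
        parts.append(conf.read_text().strip())
    out_file.write_text("\n\n".join(parts) + "\n")
```

Run as a packaging step, this keeps the per-category files as the source of truth while shipping the one merged file Splunk expects.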
Currently I am using the query below to extract the rows where the Employee_Id column has fewer than 9 digits. However, I have another requirement on the same table: to extract the employee IDs that are alphanumeric (like N0001234, etc.) or contain any special characters. So overall we need the records that are less than 9 digits, more than 9 digits, alphanumeric, or contain special characters.

index=QQQQQ sourcetype="XXXXX*" source=TTTTTT Extension="*" MSID="*" Employee_Active="*" Employee_Id=* last_name="*" first_name="*"
| rename Extension as DN
| dedup Employee_Id
| eval emplength=len(Employee_Id)
| stats count by DN, MSID, Employee_Active, emplength, Employee_Id, last_name, first_name
| where emplength>9
| table DN, MSID, Employee_Active, emplength, Employee_Id, last_name, first_name
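All four conditions listed above (too short, too long, alphanumeric, special characters) collapse into one rule: the ID is anything other than exactly 9 digits. In SPL that is typically a single negated match() test; the same logic can be sketched in Python (the helper name and sample IDs are hypothetical):

```python
import re

# Exactly nine digits, nothing else
NINE_DIGITS = re.compile(r"^\d{9}$")

def is_nonstandard(employee_id: str) -> bool:
    """True when the ID is anything other than exactly 9 digits."""
    return not NINE_DIGITS.fullmatch(employee_id)

ids = ["123456789", "1234", "N0001234", "12345-678"]
flagged = [i for i in ids if is_nonstandard(i)]
print(flagged)
```

The single pattern covers short IDs, long IDs, alphanumeric IDs like N0001234, and IDs with special characters in one pass, rather than needing separate length and character-class checks.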
I have a query that returns two values ("A" and "B"), and I want to make a dynamic field display. When "A" is bigger than "B", show "A" in green; when "A" is lower than "B", show it in red. I managed to create dashboards like that, but with fixed values; in my case "B" is dynamic and comes from the same query as "A" (using two queries would be fine with me too). How can I do that?

{
  "type": "splunk.singlevalue",
  "options": {
    "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)"
  },
  "dataSources": {
    "primary": "ds_N2TXpjLO"
  },
  "context": {
    "majorColorEditorConfig": [
      { "value": "#D41F1F", "to": "B" },
      { "value": "#118832", "from": "B" }
    ]
  },
  "showProgressBar": false,
  "showLastUpdated": false
}
When upgrading the Universal Forwarder using the .tgz on macOS, a pop-up appears and states the following:

The "DeRez" command requires the command line developer tools. Would you like to install the tools now?

If 'Cancel' is selected, it appears not to affect anything, but I am unsure why this is happening. It appears to happen while the configuration is being migrated during a Splunk UF version upgrade on macOS. What is the "DeRez" command, and what is not being migrated when this happens?

Thanks!

-- Migration information is being logged to '/Applications/splunkforwarder/var/log/splunk/migration.log.2023-02-01.10-15-52'
-- This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
Migrating to:
VERSION=9.0.2
BUILD=17e00c557dc1
PRODUCT=splunk
PLATFORM=Darwin-universal
It seems that the Splunk default certificates are being used. If certificate validation is turned on using the default certificates (not recommended), this may result in loss of communication in mixed-version Splunk environments after upgrade.
"/Applications/splunkforwarder/etc/auth/ca.pem": already a renewed Splunk certificate: skipping renewal
"/Applications/splunkforwarder/etc/auth/cacert.pem": already a renewed Splunk certificate: skipping renewal
[DFS] Performing migration.
[DFS] Finished migration.
[Peer-apps] Performing migration.
[Peer-apps] Finished migration.
Init script installed at /Library/LaunchDaemons//com.splunk.plist.
Init script is configured to run at boot.
Splunk> Another one.
Checking prerequisites...
Management port has been set disabled; cli support for this configuration is currently incomplete.
Invalid key in stanza [webhook] in /Applications/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/Applications/splunkforwarder/splunkforwarder-9.0.2-17e00c557dc1-darwin-universal2-manifest'
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done
Hello All, I'm new to Splunk. I have the table below. I want to show the Previous Month Actual Cost in a single value panel, with the difference as a subscript and a trend indicator showing whether it has increased or decreased compared to the Current Month Forecast Cost. How can I structure the Simple XML to get the desired output?
Hello, we have installed a Splunk Heavy Forwarder in IBM Cloud, and it is communicating with the indexers, but we are experiencing network flickering; in other words, strange things are happening on the network. There is a warning message that the Splunk internal data sent from the forwarder to the indexers is too much for the indexers to handle. We have a firewall installed in front of the indexers that is denying the traffic, stating 'TCP anomaly': 'Non-compliant TCP packets coming from multiple external sources were detected'. Could someone help me with this topic: is the forwarder sending too much data, or is there some network issue?
I am facing an issue where, for certain sourcetypes, events are indexed with a future timestamp. The data for these sourcetypes is indexed into Splunk via a HF and forwarded to the IDX. The props are defined from the SH GUI. Please help me understand and eradicate this issue. Example event data:

12/12/2024 10:08:24 PM
LogName=Application
SourceName=Galaxy
EventCode=1
EventType=4
Type=Information
ComputerName=testserver.gtest.com
TaskCategory=None
OpCode=None
RecordNumber=8425512
Keywords=Classic
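One detail worth noting: timestamp-extraction props take effect on the first full Splunk instance that parses the data (here, the HF), not on the search head, which is a common reason props defined in the SH GUI appear to have no effect. A sketch of an explicit timestamp configuration for this event format, deployed to the HF; the stanza name is a placeholder and the values are assumptions based on the sample event:

```ini
# props.conf on the heavy forwarder (first parsing tier), not the SH
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 25
# Reject timestamps more than 2 days in the future (2 is the default);
# raising or lowering this changes how far-future events are handled
MAX_DAYS_HENCE = 2
```

If the events genuinely carry future dates (as the 12/12/2024 sample suggests), the fix lies with the log source's clock rather than with props.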
We are running Splunk Cloud version 9.0.2208.4, and all the other components such as HFs and other client machines are running version 9.0 or above, but we have a few critical Windows client machines running the Windows 2008 R2 OS, and there are very important critical logs that need to be ingested into Splunk from those machines. So, can I install Splunk UF version 9.0.3 on those Windows 2008 R2 machines; will it be able to collect logs, and is it supported? Or do I need to install some lower version and ingest them that way? What is the recommended solution to get the logs ingested into Splunk? Kindly help with the same.
Hi Team, I have downloaded the 9.0.3 UF version for Windows 64-bit, installed the Splunk UF on my Microsoft Windows Server 2016 Data Center, and started the services. After that, when I try to check the Splunk status or restart the Splunk UF, I get the warning message below at the cmd prompt. How do I overcome this?

Warning: overriding %SPLUNK_HOME% setting in environment ("C:\Program Files\SplunkUniversalForwarder\bin\") with "C:\Program Files\SplunkUniversalForwarder". If this is not correct, edit C:\Program Files\SplunkUniversalForwarder\etc\splunk-launch.conf

When I checked the splunk-launch.conf file, I can see the following:

# SPLUNK_HOME=C:\Program Files\SplunkUniversalForwarder

And when I enter %SPLUNK_HOME% in Run, it navigates directly to the directory below:

C:\Program Files\SplunkUniversalForwarder\bin

When I checked the environment variables, I can see SPLUNK_HOME has been set to C:\Program Files\SplunkUniversalForwarder\bin. So how do I get rid of the warning?
I am trying to extract IPs from a field called Text, where this field contains IPs and some string values. The field does not always contain only one IP; it may contain 2 IPs, 3, 5, or more than that. The IPs are not the same across events, while the string "value" is the same for all events, e.g.:

Text= value 127.0.0.1,10.x.x.x, 10.x.x.1,10.x.x.3
Text= value 145.X.X.2, 19.x.x.3
Text= value 123.X.X.X

So, I need to extract only the IPs separately (irrespective of the count of IPs), and "value" into one field.
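In SPL this kind of repeated extraction is usually done with rex and max_match=0; the underlying regex logic can be sketched in Python. The x/X placeholders in the examples above stand in for digits, so the sketch uses fully numeric addresses (the sample strings are hypothetical):

```python
import re

# A simple IPv4-shaped pattern: four groups of 1-3 digits.
# It does not validate the 0-255 range of each octet.
IP_PATTERN = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def extract_ips(text: str) -> list[str]:
    """Return every IPv4-looking token in the string, however many there are."""
    return IP_PATTERN.findall(text)

print(extract_ips("value 127.0.0.1,10.1.2.3, 10.4.5.1,10.6.7.3"))
```

Because findall returns all non-overlapping matches, the same pattern handles one IP or many without knowing the count in advance, which is the same property max_match=0 gives rex in SPL.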
How can I generate a report of health rule threshold breaches for a certain month by using the API?
I am looking for SPL with which we can check who updated the whitelist in a lookup table, and also what changes were made, compared with the previous version.

Thanks, Sahil
I have created a Splunk dashboard in which a panel consists of a table. The table has multiple columns, one of which contains URL values. The URL is clickable. I have used the following piece of code to make it clickable:

<drilldown>
  <condition field="abc">
    <link target="_blank">$row.abc|n$</link>
  </condition>
  <condition field="*"></condition>
</drilldown>

Now, the problem is that the entire row is highlighted in blue, and when I hover the mouse over any column, it gives the impression that the field value is clickable when it is not. I want all the column values to stay black and unselected; only the URL values should remain highlighted.
I have a field in my database datamodel called 'os.user', and I have a lookup called 'userAccount'. The 'userAccount' lookup has a field called 'user' that is the same as the 'os.user' field of the database DM. I want to know if all of my 'os.user' values are present in the 'userAccount' lookup. My requirement is to know whether my lookup is sufficient and contains all the 'os.user' values. I could use some guidance on the SPL.
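At its core this coverage check is a set difference: which 'os.user' values from the datamodel are absent from the lookup. In SPL the usual shape is a lookup followed by a null test on the output field; the logic itself can be sketched in Python with hypothetical sample values:

```python
# Values of os.user observed in the datamodel (hypothetical samples)
dm_users = {"alice", "bob", "svc_backup"}

# Values of the user field in the userAccount lookup (hypothetical samples)
lookup_users = {"alice", "bob"}

# Set difference: datamodel users the lookup does not cover.
# An empty result means the lookup is sufficient.
missing = dm_users - lookup_users
print(sorted(missing))
```

A non-empty `missing` set is exactly the list of users the lookup would need gain to be complete.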
Hello all, since our update to Splunk Enterprise 9.0.2 we have noticed that the dashboard colors (Simple XML) changed completely, and the new colors are terrible! Did anyone experience something similar after the update? And if yes: were you able to get the colors back to the way they were?

On 8.2.2.3

On 9.0.2

I would appreciate every hint. The new colors are something that cannot be presented to management.

Thanks and best regards
I have a dashboard in which there is a pie chart like the one below. I need it to show a label with the event count, and also the color details: green for success, blue for running, red for error, orange for wait. It has to show those details along with the event count.

I also need help with the issue below for the bar chart. In the chart above there are multiple columns with different colors for a single day; instead, there should be a single column with different colors per day. Can someone please help me out with this?
Hi, I'm creating a search query to monitor when 3 users create accounts within an hour:

index=* sourcetype="WinEventLog:Security" EventCode=4720
| stats count as total_accounts by host
| where total_accounts >= 3
| timechart span=1h sum(total_accounts)
| eval time_range=timeRange("YYYY-MM-DD hh:mm:ss", "<start-time>", "<end-time>")
Hi folks, looking for some expert opinion. My logs contain many different files, and I want to capture the start and end time for each file. The logs look like this:

timestamp 202301_filex_a_b.z started execution
timestamp 202301_filex_a_b.z finished execution
timestamp 202301_filey_e_f.z started execution
timestamp 202301_filey_e_f.z finished execution

The output would look something like:

filex | start timestamp | end timestamp | duration
filey | start timestamp | end timestamp | duration

I was able to write different searches for start and end and then join them on the filename, but I'm wondering if there is a better way to do it.
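A common alternative to the join is a single pass that groups events by filename and takes the earliest "started" and latest "finished" time per file (in SPL, a stats over both event types by file). The grouping logic can be sketched in Python; the epoch-second timestamps here are hypothetical, not from the real logs:

```python
# Pair "started"/"finished" events per file and compute duration.
# (timestamp, filename, action) tuples; timestamps are made-up epoch seconds.
events = [
    (1000, "202301_filex_a_b.z", "started"),
    (1060, "202301_filex_a_b.z", "finished"),
    (1010, "202301_filey_e_f.z", "started"),
    (1100, "202301_filey_e_f.z", "finished"),
]

# Group both event types under the filename in one pass, no join needed.
times: dict[str, dict[str, int]] = {}
for ts, name, action in events:
    times.setdefault(name, {})[action] = ts

# Emit a (file, start, end, duration) row for every fully paired file.
rows = [
    (name, t["started"], t["finished"], t["finished"] - t["started"])
    for name, t in times.items()
    if "started" in t and "finished" in t
]
for row in rows:
    print(row)
```

The `if "started" in t and "finished" in t` guard also quietly handles files that never logged a finish, which a join would otherwise surface as partial rows.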
There are five different hosts in our fleet, in two different timezones, with four sourcetypes on each. The problem is that the time shown in Splunk Cloud isn't always the timestamp from the logs; they are different. The hosts pass the data through an intermediate forwarder (a universal forwarder running inside) which is in UTC. There are also cases where one sourcetype from one host shows up with the correctly parsed time, but when it comes from a different source, it doesn't. I'll explain below:

Five different hosts: host_A (MST), host_B (MST), host_C (UTC), host_D (UTC), host_E (UTC)
Four different sourcetypes: src_W, src_X, src_Y, src_Z

For host_A (MST) and host_B (MST), src_W is shown at the correct time; src_X and src_Y are not. For example, if src_X and src_Y have the timestamp 05/02/2022 14:xx:xx, in Splunk it shows as 04/02/2022 7:xx:xx. Between these two, src_Z only comes from host_A, and a timestamp of 05/02/2022 14:xx:xx shows in Splunk as 04/02/2022 9:xx:xx.

For host_C (UTC): if src_W and src_X have the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_C doesn't have Y and Z.

For host_D (UTC): if src_Y has the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_D doesn't have the other sourcetypes.

For host_E (UTC): if src_Y has the timestamp 05/02/2022 21:xx:xx, in Splunk it shows as 04/02/2022 2:xx:xx. host_E doesn't have the other sourcetypes. For src_Z, a timestamp of 05/02/2022 14:xx:xx shows in Splunk as 04/02/2022 9:xx:xx, just like on host_A.

Sorry, this might seem very complicated, and it is in MST, not PST like I said before. My Splunk Cloud instance is also set to MST. Below is how the log formatting looks:

This is how a log from src_W looks: eni=xx.yy.zz.aa client_ip=- - - [05/Feb/2023:17:46:53 -0700] ... ... ....
This is how a log from src_X looks: DEBUG 2023-02-06 00:49:22 ... ... ...
This is how a log from src_Y looks: INFO 2023-02-06 00:50:02 ... ... ...
This is how a log from src_Z looks: qwertyui Sun Feb 5 04:40:39 2023:

Thank you for the help!
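One pattern in the formats above: src_W carries an explicit offset (-0700) and parses correctly, while src_X, src_Y, and src_Z have no timezone in the timestamp, so Splunk falls back to the timezone of the instance that parses them. A per-sourcetype TZ override in props.conf is the usual fix; a sketch, where the sourcetype names are the placeholders from the post and America/Phoenix is one way to express year-round MST (the right zones per host would need confirming):

```ini
# props.conf on the first Splunk instance that parses these events;
# a universal forwarder does not apply TZ, so in this setup that is
# likely the Splunk Cloud indexing tier or a heavy forwarder.
[src_X]
TZ = America/Phoenix

[src_Y]
TZ = America/Phoenix
```

Since the same sourcetype comes from both MST and UTC hosts, a host-based stanza (e.g. `[host::host_C]`) rather than a sourcetype stanza may be needed for the UTC machines.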
Hey everyone,

I'm at a loss as to what this is. I always get stuck at install step 27, and then it throws these errors at me, but I can't figure out what it is or how to fix it.

I followed these steps: https://docs.splunk.com/Documentation/SOARonprem/5.5.0/Install/InstallUnprivileged

It's being run on Red Hat Linux 7 on Google's GCP.

I've attached a photo of the errors.

Any help is appreciated.