All Topics



Hi all, I am new to Splunk and looking for some guidance. I have created a new sourcetype in my inputs.conf file on my Linux forwarder workstation. It is monitoring the /home/*/.bash_history files to capture users' commands. I can see data coming through in the Splunk UI, but I don't know how to assign a field to the value. For example, it lists the date/time of the event and the command. I was used to seeing something like "exe=su -" or "exe=cat inputs.conf". Instead I get the following:

8/17/20 1:52:35.000 PM  su -
8/17/20 1:52:32.000 PM  cat inputs.conf

I am trying to create a table from this. How would I create a field "action" and assign the commands such as "su -" and "cat inputs.conf" to that field?
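In Splunk this would normally be a `rex` extraction against _raw; as an illustration of the logic outside Splunk, here is a rough Python sketch. The timestamp format is assumed from the two sample events above.

```python
import re

# Assumed event shape, taken from the examples above:
# "8/17/20 1:52:35.000 PM su -"
EVENT_RE = re.compile(
    r"^\d{1,2}/\d{1,2}/\d{2}\s+"       # date, e.g. 8/17/20
    r"\d{1,2}:\d{2}:\d{2}\.\d{3}\s+"   # time, e.g. 1:52:35.000
    r"[AP]M\s+"                        # AM/PM marker
    r"(?P<action>.+)$"                 # everything after the timestamp
)

def extract_action(event: str):
    """Return the command portion of an event, or None if it doesn't match."""
    m = EVENT_RE.match(event)
    return m.group("action") if m else None
```

The same named group works in SPL with `rex field=_raw` (SPL uses `(?<action>...)` rather than Python's `(?P<action>...)`).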
Hello, I have a raw data file from which I am trying to extract data and create a dashboard. From this raw file I am using regex and mvexpand to parse the data for individual VMs and storing that information in a table. Now I want to use the data stored in the table to extract fields from it. I have a regex query to extract particular fields, but I am unable to use the table as an input to make those regexes work.

Splunk query:

index="test_log" source="/var/tmp/logs/test.log"
| rex max_match=0 field=_raw "(?<lineData>[^;]+)"
| mvexpand lineData
| fields lineData
| table lineData

Sample raw log file:

VM_NAME: vm1 Process Process Count ProcessTestA 0 ProcessTestB 1; VM_NAME: vm2 Process Process Count ProcessTestA 0 ProcessTestB 3;

Sample data in the table:

VM_NAME: vm1 ProcessTestA 0 ProcessTestB 1
---------------------------------------
VM_NAME: vm2 ProcessTestA 0 ProcessTestB 3

Final data needed:

VM_NAME       vm1
Process       Process Count
ProcessTestA  0
ProcessTestB  1

Please suggest how I can do this. Thank you.
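To make the parsing logic concrete, here is a rough Python sketch of what the split-on-';' plus per-block field extraction amounts to (the field layout is assumed from the sample raw log above; in SPL the per-block extraction would be a second `rex` after the mvexpand):

```python
import re

RAW = ("VM_NAME: vm1 Process Process Count ProcessTestA 0 ProcessTestB 1; "
       "VM_NAME: vm2 Process Process Count ProcessTestA 0 ProcessTestB 3;")

def parse_vm_blocks(raw: str):
    """Split the raw log on ';' (the role mvexpand plays per event) and pull
    out the VM name plus (process, count) pairs from each block."""
    result = {}
    for block in raw.split(";"):
        block = block.strip()
        if not block:
            continue
        name = re.search(r"VM_NAME:\s*(\S+)", block).group(1)
        # Only "word number" pairs match, e.g. ('ProcessTestA', '0')
        pairs = re.findall(r"(\w+)\s+(\d+)", block)
        result[name] = {proc: int(cnt) for proc, cnt in pairs}
    return result
```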
Hi guys, I have a .csv lookup file that maintains the 'inactive' accounts list. Can anyone help me with a query to remove one of the usernames from the list?
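In SPL this is usually done by reading the lookup back, filtering, and rewriting it, along the lines of `| inputlookup <file>.csv | where username!="jdoe" | outputlookup <file>.csv` (the column name `username` is an assumption). The same filtering logic as a Python sketch over CSV text:

```python
import csv
import io

def remove_user(csv_text: str, username: str, field: str = "username") -> str:
    """Return the CSV with rows whose `field` equals `username` removed.
    The 'username' column name is an assumption; use the lookup's real header."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    kept = [r for r in rows if r[field] != username]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys() if rows else [field])
    writer.writeheader()  # always rewrite the header, even if no rows remain
    writer.writerows(kept)
    return out.getvalue()
```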
Can someone show me what the regex expression for the below extract would be, and can you show me how you arrived at that conclusion? NB: I have tried regex101 and I'm still confused.

I have tried this expression:

rex field=_raw "ERROR - (?<Error_Message>.+)"

Sample event:

2020-08-17 16:34:02,141 [68618-1397] ERROR NodePoolServlet - [urn:uuid:6144BCB27826B3BECC1597674752077153] Bot Manager can't find a free Bot to execute a robotic task. Please check that Bots with requested capabilties are alive using the Healtcheck API and the Bot Source size in Control Tower is equal to the number of available Bots.
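One likely reason the attempted pattern misses is that a component name (NodePoolServlet) sits between "ERROR" and the hyphen, so "ERROR - " never literally occurs. A sketch of a pattern that skips that token, verified here in Python (SPL's `(?<name>...)` becomes Python's `(?P<name>...)`):

```python
import re

LOG = ("2020-08-17 16:34:02,141 [68618-1397] ERROR NodePoolServlet - "
       "[urn:uuid:6144BCB27826B3BECC1597674752077153] Bot Manager can't find a free Bot "
       "to execute a robotic task.")

# "ERROR - ..." never matches because of the component name; allow one
# non-space token between ERROR and the hyphen, then capture the rest.
PATTERN = re.compile(r"ERROR\s+\S+\s+-\s+(?P<Error_Message>.+)")

match = PATTERN.search(LOG)
```

The SPL equivalent would be rex field=_raw "ERROR\s+\S+\s+-\s+(?<Error_Message>.+)".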
I have a saved search which runs every month, looks at my vulnerability events, and outputs the results into a lookup table. I am deduping the "Plugin ID" value so that I only get unique vulnerabilities in my lookup table. I have also added 3 extra columns to the lookup table, but the results from the saved search will not have these columns. I'm struggling with how to retain the values of those columns while still appending new results to the table. The search below, which I have tried, retains the extra columns but duplicates the results each time the search is run. I've tried not using append=t with outputlookup, but that just replaces my whole lookup table and deletes the extra columns that I need in there. Is there any other way that I can use outputlookup and retain the extra columns while still deduping the Plugin ID? Thank you!

| stats values(state) as State, values(severity) as Severity, values(tags) as "Tags", values(plugin.name) as "Plugin Name", values(plugin_publication_date) as "Plugin Publication Date", count by plugin_id
| rename plugin_id as "Plugin ID", count as "Total Hosts"
| eval Severity=lower(Severity)
| sort num(Severity), -num("Total Hosts")
| inputlookup Vulnerabilities append=t
| dedup "Plugin ID"
| outputlookup Vulnerabilities append=t
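One common shape for this problem is a per-key merge: take the existing lookup rows keyed by Plugin ID, overlay the new results on top (new values win for overlapping columns, manual columns survive), and write the result back without append. In SPL that is typically done by pulling the lookup in and combining rows per Plugin ID before a plain outputlookup; the merge logic itself, sketched in Python:

```python
def merge_lookup(existing, new, key="Plugin ID"):
    """Merge new scan results into an existing lookup keyed by `key`.
    Manually maintained columns in `existing` survive; columns present in
    `new` overwrite the old values; new keys are simply added."""
    merged = {row[key]: dict(row) for row in existing}
    for row in new:
        merged.setdefault(row[key], {}).update(row)
    return list(merged.values())
```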
Does anything change in a universal forwarder if I change the server name, e.g. SPLUNK_SERVER_NAME=SplunkForwarder1234? I checked on Linux with systemctl status and didn't find any change in the Splunk daemon; it was still named "splunkd". Or does this only affect the Windows service name on a UF?

PS: I know it shouldn't be changed ideally, but I'm still experimenting with a few things...
I have a couple of related things to be addressed. I want to remove the search button from the panels so that users with specific roles can only see the dashboards. While the 'user' cannot see the search button, I want admins to still see that option, so putting <option name="link.openSearch.visible">false</option> in the XML is ruled out. Is there any other way?

I also do not want users to have the search capability, but if I remove it, the searches in dashboards stop working as well. Can I let them see the dashboards but take away the ability to run ad-hoc searches?
Consider events with the below fields:

OS     transaction    numbers
Win    purchased      150
Unix   purchased      200
Win    sold           100
Unix   sold           125

I want the results to be:

OS     InHand (purchased - sold)
Win    50
Unix   75

How can I do this?
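In SPL this is commonly a signed eval followed by a sum, roughly: eval signed=if(transaction=="sold",-numbers,numbers) | stats sum(signed) as InHand by OS. The same arithmetic as a Python sketch over the sample events:

```python
from collections import defaultdict

events = [
    ("Win", "purchased", 150),
    ("Unix", "purchased", 200),
    ("Win", "sold", 100),
    ("Unix", "sold", 125),
]

# InHand = purchased - sold, per OS: sold transactions count negatively.
in_hand = defaultdict(int)
for os_name, transaction, numbers in events:
    in_hand[os_name] += numbers if transaction == "purchased" else -numbers
```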
I want to create an alert that monitors for 5+ authentication failures for VPN login within an hour, but I'm not sure how to get the alert to monitor for 5+ failures for any single user. Here's an example log:

[2020-08-17 11:40:10,550] [IG Audit Writer] [INFO ] [IG.AUDIT] [AUD7505] [VPN_AD_Group/user] The Radius server ise_servers rejected authentication for user VPN_AD_Group/user.
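In Splunk the usual shape is to extract the user, then stats count by user over the alert's hourly window and trigger where count>=5. The per-user counting itself, sketched in Python with an extraction pattern assumed from the sample log above:

```python
import re
from collections import Counter

SAMPLE = ("[2020-08-17 11:40:10,550] [IG Audit Writer] [INFO ] [IG.AUDIT] [AUD7505] "
          "[VPN_AD_Group/user] The Radius server ise_servers rejected authentication "
          "for user VPN_AD_Group/user.")

# Pattern assumed from the sample event; the trailing period is stripped off.
USER_RE = re.compile(r"rejected authentication for user (?P<user>\S+)")

def users_over_threshold(events, threshold=5):
    """Count rejection events per user; return users at/over the threshold.
    (In Splunk, the one-hour window comes from the alert's time range.)"""
    counts = Counter()
    for event in events:
        m = USER_RE.search(event)
        if m:
            counts[m.group("user").rstrip(".")] += 1
    return {u: c for u, c in counts.items() if c >= threshold}
```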
The following error appears: "The number of search artifacts in the dispatch directory is higher than recommended (count=5155, warning threshold=5000) and could have an impact on search performance. Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf / dispatch_dir_warning_size."

After reviewing the forum for a while, I found mentions of the directory /var2/splunk/splunk/var/run/splunk/dispatch, where it is suggested to erase the oldest artifacts, and mentions of scheduled jobs, where it is recommended that they not run in real time. When I go in via the web UI, I find more than 700 jobs, of which 10 are in progress and take more than 25 minutes. These jobs correspond to applications such as:

SA-ThreatIntelligence
DA-ESS-NetworkProtection
DA-ESS-EndpointProtection
SA-AccessProtection

How should I proceed?
Hi all, I am very new to Splunk and I am trying to migrate from Python 2 to 3 as we are migrating from Splunk 7 to 8. I used the Splunk readiness app and was able to make all the files in my app compatible with Python 2 and 3 using the Six and Future libraries. I also changed the global setting in server.conf to Python 3, but I am still not able to log in using my API key. The API key itself works: I checked it with Splunk 7 and it works seamlessly. Please help.
Hello, first of all, thanks for any help you may be able to give me. I would appreciate some help with a problem I'm having. I uploaded a CSV file to my Splunk instance that contains only Date/Value pairs. I would like to select the Value of 2020-07-31 and the Value of 2020-06-30 and calculate the percentage change over that month. I think I may be able to work out the percentage calculation on my own if I can get a hand with selecting the two values. I'd share my queries so far, but I wasted the whole day yesterday trying to get this to work with little useful progress.
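For the calculation itself, a Python sketch of the month-over-month percentage change (the values below are invented for illustration; the real ones come from the CSV, and in SPL the two rows would typically be picked out with a where/eval on the Date field):

```python
# Illustrative values only; the real Date/Value pairs come from the CSV.
data = {
    "2020-06-30": 120.0,
    "2020-07-31": 150.0,
}

def pct_change(data, old_date, new_date):
    """Percentage change from the value at old_date to the value at new_date."""
    old, new = data[old_date], data[new_date]
    return (new - old) / old * 100.0
```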
Hello! I'm facing an issue when trying to delete old events using the "| delete" command. I find all the events that I need in a date range and then run the | delete command. After that, a scan starts and never ends. In the job inspector I see the component "dispatch.fetch.rcp.phase_0" with ever-increasing invocation and duration values. What could cause this issue? Why does the delete job never finish?

Splunk Enterprise Server 7.2.5.1, two clustered indexers, two clustered search heads.
Can someone explain why, when I create a lookup file, my Windows instance of Splunk provides the user name in the audittrail log, but on my Linux instance the user name is "N/A"? Is there a way to ensure the user name is captured in audittrail when a user creates a lookup file?
Hi, I'm trying to reduce the number of alerts in Splunk. At the moment we receive a Splunk alert on queue size every 30 minutes. The problem is that the queue size/depth is often unchanged each time it triggers; I need a way to compare the previous queue size with the current queue size and stop alerting when they are the same.

index=day sourcetype="mqmon" EventType="QueueDepth" QDepth>0
| eval time=strftime(_time,"%d %b %Y %H:%M %p")
| stats latest(QDepth) As QueueSize, max(time) As LastEvent by host, QMan, QName
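One common SPL approach is to write each run's depths to a lookup and compare the next run's results against it, alerting only on differences. The comparison logic itself, as a Python sketch keyed by queue name (the dict shapes are an assumption for illustration):

```python
def should_alert(prev_depths, curr_depths):
    """Return only the queues whose depth changed (or are newly seen) since
    the previous run; queues with an unchanged depth are suppressed."""
    return {
        queue: depth
        for queue, depth in curr_depths.items()
        if depth > 0 and prev_depths.get(queue) != depth
    }
```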
Hi, I installed Splunk on a Linux server, but it shows the following error in Splunk Enterprise: "Warning: web interface does not seem to be avaible". I searched in /opt/splunk/var/log/splunk/ but I can't find anything. Can you help me, please? Regards.
I have a data file, and this source file does not contain any data on most days; that is a valid scenario. But since it has no data, my panel in dashboards shows "No results found".

index="xyz" source="*RatedUsg_OutSeq.dat"
| eventstats max(Extract_Time) AS most_recent
| where (Extract_Time = most_recent)
| table Extract_Time File_Name File_Sequence Source_System_Key

I am new to Splunk. My requirement is that on days where the .dat file is empty, the panel should display the message "No Records Today"; on other days the query should work as it does now. Please help; I need this to proceed further.
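The usual pattern is a fallback row that is emitted only when the result set is empty (in SPL this is often done with appendpipe plus a count check). The logic is trivial but worth spelling out, here as a Python sketch:

```python
def with_fallback(rows, message="No Records Today"):
    """Return the query results unchanged, or a single placeholder row
    carrying the message when the result set is empty."""
    return rows if rows else [{"Message": message}]
```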
Hi guys, I'd like to be able to allow 'insecure' logins for my dashboards to be used with an internal signage solution. I have changed the value of  'enable_insecure_login = False' to 'True' within var\run\splunk\merged\web.conf. I can save, close, then reopen the file and the value is still set as 'True'. HOWEVER... If I restart my Splunk server (to pick up the new change), the value reverts back to 'False' and I'm unable to access the dashboard using the insecure URL. Does anyone know why the 'web.conf' file may reset the change I made after a restart? Thanks for your help! Best wishes, D
Hello everyone, good day! I have the queries below on ServiceNow integration with the Splunk tool.

If the integration happens via a MID server, what is the endpoint we will be sharing with the Splunk tool: a ServiceNow instance URL or a MID-server-specific endpoint?

What approach does Splunk follow to push events to ServiceNow:
1. custom generating search commands
2. custom streaming search commands
3. alert-triggered scripts

How does the push-event filtering happen on the Splunk side?

We have gone through the Splunk documentation but it is not clear. Any help or reference is much appreciated. Thanks in advance.

Regards,
Bharath Doppalapudi.
Hello guys, I've run into a problem. I created a dashboard where you can edit and delete lookup entries via user input. The problem is that when the user deletes the last entry of the lookup, the header of the CSV disappears, and when a saved search is run, it cannot find the defined fields. Does anyone have a solution to keep the header fields even if the last line is deleted, apart from creating a KV store?

Kind regards,
Christoph
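The underlying issue is that rewriting a lookup from zero result rows also drops the header line. One fix is to always rewrite the header regardless of how many rows survive the delete; a Python sketch of that header-preserving rewrite (the example fields are invented for illustration):

```python
import csv
import io

def delete_rows(csv_text: str, predicate) -> str:
    """Remove rows matching `predicate`, but always rewrite the header line,
    so an emptied lookup keeps its field names."""
    reader = csv.DictReader(io.StringIO(csv_text))
    fieldnames = reader.fieldnames or []
    kept = [r for r in reader if not predicate(r)]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()  # written even when `kept` is empty
    writer.writerows(kept)
    return out.getvalue()
```

On the SPL side, recent versions also offer `outputlookup ... override_if_empty=false` so an empty result set does not clobber the existing file; check your version's documentation before relying on it.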