Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I would like to get the average of multiple fields in the same row, but not all. Would anyone be able to advise on this?

query | chart latest(time_taken) by process server

Results:

Process  Local-1  Local-2  Avg(Local)  Remote-1  Remote-2
A        1        2        1.5         2         2
B        1        3        2           3         3

I would like to add an Avg(Local) field which gives me the average time taken by the processes running on Local-1 and Local-2. Appreciate any suggestions, thanks!
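A minimal sketch of one way to add that column, assuming the chart output columns are named exactly Local-1 and Local-2 (hyphenated field names must be wrapped in single quotes wherever eval reads them):

```
query
| chart latest(time_taken) by process server
| eval "Avg(Local)" = ('Local-1' + 'Local-2') / 2
```

In eval, double quotes on the left name the new field, while single quotes on the right read existing fields whose names contain hyphens; without the single quotes, Local-1 would be parsed as the subtraction Local minus 1.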
I have a list of items plotted in a line graph, which is basically time-series data. I would like to have an option to select one or more items from that list and see only their lines. For example, the graph below has two items plotted as time series. How can I view only one item, or a chosen subset of items, when there are more items? Can I add a search for the list of items on the right, along with the list?
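One common pattern, sketched here against assumed names (the index, the field `item`, and the base search are all placeholders), is a Simple XML multiselect input whose token is spliced into the panel's search:

```
<input type="multiselect" token="items">
  <label>Items</label>
  <fieldForLabel>item</fieldForLabel>
  <fieldForValue>item</fieldForValue>
  <search>
    <query>index=main sourcetype=metrics | stats count by item</query>
  </search>
  <valuePrefix>item="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
```

The panel search then becomes something like `index=main sourcetype=metrics $items$ | timechart avg(value) by item`, so only the selected items are plotted.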
How do I display an authentication error in Python for a Splunk connection?
How do I display a Oneshot ResponseReader object?
Hi All, I was trying to generate more than 10,000 results from my search. It displayed a message saying the results are being truncated. Is there any way to change the limit for my column chart in the XML file? I don't have permission to change visualization.conf, so is there any way to change it through the XML source?
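If the truncation is happening at the chart-rendering layer, one per-panel option worth trying in the Simple XML (hedged: option support varies by Splunk version) is:

```
<option name="charting.chart.resultTruncationLimit">20000</option>
```

This goes inside the panel's `<chart>` element and overrides the truncation limit for that chart only, without touching any .conf file.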
Hi all, I need help with the below query. I have a set of application logs, all in text format, which are generated every day. I need to send all those logs to Splunk with proper field extraction. Please assist.
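A minimal sketch of what the plumbing might look like; every path, sourcetype name, and regex below is a placeholder to adapt to the actual log format:

```
# inputs.conf on the forwarder: monitor the daily text logs
[monitor:///opt/myapp/logs/*.log]
sourcetype = myapp_logs
index = app

# props.conf on the search head: search-time field extraction
[myapp_logs]
EXTRACT-myapp = ^(?<timestamp>\S+\s\S+)\s+(?<level>\w+)\s+(?<message>.*)$
```

The EXTRACT- regex must be written against the real line format of the logs; posting a sample line usually gets a precise extraction.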
Hi, I am trying to use the case keyword to solve a multiple nested statement, but it is only giving me the output for the else value. It seems like it is not going into any other branch to check. Could anyone please help me here? I tried using multiple if statements with eval and still had the same issue.

Problem statement: I want to compare the values of status-fail and status-success and generate the output on that basis:

case 1: if status-fail = 0 and status-success > 0 --> successful logins
case 2: if status-fail > 0 and status-success > 0 --> multi-successful logins
case 3: if status-fail > 0 and status-success = 0 --> multi-fail
case 4: if status-fail > 0 --> fail logins

Below is the query I am using:

table hqid, httpStatus
| eval status-success=if(httpStatus="200",1,0)
| eval status-fail=if(httpStatus != "200",1,0)
| stats sum(status-success) as status-success, sum(status-fail) as status-fail by hqid
| eval status = case(status-fail = 0 AND status-success > 0, "successful-logins", status-fail > 0 AND status-success > 0, "multi-success", status-fail > 0 AND status-success=0, "multi-fail", status-fail > 0, "fail", 1=1, "Others")
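One likely culprit, offered as a guess: field names containing hyphens are treated as arithmetic in eval unless wrapped in single quotes, so `status-fail = 0` evaluates the subtraction `status - fail` rather than reading the field, which sends everything to the else branch. A sketch of the same query with the hyphens replaced by underscores to sidestep the quoting entirely:

```
| eval status_success=if(httpStatus="200",1,0)
| eval status_fail=if(httpStatus!="200",1,0)
| stats sum(status_success) as status_success, sum(status_fail) as status_fail by hqid
| eval status=case(status_fail=0 AND status_success>0, "successful-logins",
                   status_fail>0 AND status_success>0, "multi-success",
                   status_fail>0 AND status_success=0, "multi-fail",
                   status_fail>0, "fail",
                   1=1, "Others")
```

Alternatively, keep the hyphenated names but write 'status-fail' and 'status-success' in single quotes everywhere eval reads them.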
Gentlemen, we are ingesting Windows Sysmon logs via TA-microsoft-sysmon, and the raw events are showing in XML format. There are a couple of fields that did not get extracted, and even with IFX, the accuracy of extracting these 2 fields isn't 100%. Below is one of the XML tags/elements from my raw event. Can someone please assist me with a regex for extracting technique_id and technique_name? As you can see, these 2 are embedded within the "RuleName" tag.

<Data Name='RuleName'>technique_id=T1055.001,technique_name=Dynamic-link Library</Data>

I have tried on regex101.com but can't get my capture group to extract these 2 values. At the end of the day, I want 2 fields, technique_id (value = T1055.001) and technique_name (value = Dynamic-link Library), to show up under "Interesting fields". Thank you in advance.
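A rex sketch that should pull both values out of that Data element, assuming the events match the sample shown (the closing angle bracket anchors the end of the name):

```
| rex field=_raw "technique_id=(?<technique_id>[^,]+),technique_name=(?<technique_name>[^<]+)<"
```

For the sample above this yields technique_id=T1055.001 and technique_name=Dynamic-link Library. To have the fields show up automatically under "Interesting fields", the same regex can go into props.conf as an EXTRACT- entry for the sourcetype.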
From the below log:

aoauwersdfx01a-mgt.example.com NewDecom: Info: 164807335647.901 0 10.200.111.06 NONE/504 0 GET http://wpad.example.com/wpad.dat - NONE/wpad.example.com

I need to extract these fields:

Field 1: result=NONE/504, changed to status=504
Field 2: url=http://wpad.example.com/wpad.dat, changed to url=wpad.example.com

I need the regular expression for this.
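A sketch of two rex extractions against the sample line (hedged: this assumes the NONE/504 token always precedes the request method and the URL always follows GET):

```
| rex "NONE/(?<status>\d+)\s"
| rex "GET\s+https?://(?<url>[^/\s]+)"
```

On the sample, the first rex matches "NONE/504 " (the later "NONE/wpad.example.com" has no digits, so it is not matched) giving status=504, and the second captures the host portion after "GET http://", giving url=wpad.example.com.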
I have a device reporting to Splunk through syslog. The traffic first goes through an F5, which passes it on to my heavy forwarders. The problem is that the year of the timestamp is out of date: the server event is generated in 2022, but in the search head I see it as 2017. I don't know whether the problem is on the origin server at the syslog protocol level, in the transport layer, or at the collection level within Splunk. This issue only occurs on 3 machines out of 10. I have reviewed the props settings, but since the year is not present in the source data, I can hardly modify the timestamp. Regards
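When the raw timestamp carries no year, Splunk has to infer one, and a bad inference can stick. A hedged props.conf sketch (the sourcetype name is a placeholder) that bounds how old a parsed timestamp may be:

```
[my_syslog_sourcetype]
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
MAX_DAYS_AGO = 30
```

With MAX_DAYS_AGO set, a timestamp that resolves to 2017 is rejected and Splunk falls back to a more recent reference instead. Whether this fully fits your F5-to-heavy-forwarder path is an assumption to verify, since the props must live on the first parsing tier (the heavy forwarders here).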
(1) index=blah Product IN (Cuteftp,Filezilla)
(2) | rex field=Image "(?<values_Image>[^\\\\]+$)"
(3) | lookup test.csv Image as values_Image OUTPUT Image
(4) | eval match=if(values_Image == Image, "yes", "no")
| table _time Product Company Description ImageLoaded Image values_Image match

(1) I am searching index=blah where "Product" = Cuteftp or Filezilla.
(2) From my results I am removing everything before the last backslash; the new field is called "values_Image".
(3) I am checking the "Image" column in the lookup file (test.csv) to see if it matches "values_Image" from my Splunk results.
(4) If there is a match, I see "yes" in the match column in Splunk. If there is no match, I see "no".

The problem I have:
When match=yes, the Image field in Splunk is populated with the value from the Image field in the lookup file (test.csv). This is good.
When match=no, the Image field in Splunk is not populated with the value from the Image field in the lookup file (test.csv). This is my problem.
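One hedged observation: `OUTPUT Image` overwrites the event's original Image field, and on a non-match the lookup returns nothing, so the original value is lost. A sketch that writes the lookup result to a separate field and keeps the original when there is no match:

```
index=blah Product IN (Cuteftp,Filezilla)
| rex field=Image "(?<values_Image>[^\\\\]+$)"
| lookup test.csv Image as values_Image OUTPUT Image as lookup_Image
| eval match=if(isnotnull(lookup_Image), "yes", "no")
| eval Image=coalesce(lookup_Image, Image)
| table _time Product Company Description ImageLoaded Image values_Image match
```

Here `coalesce` keeps the lookup value on a match and falls back to the event's own Image otherwise; using OUTPUTNEW instead of OUTPUT is another way to avoid clobbering an existing field.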
We have a large number of hosts logging to Splunk via the Universal Forwarder. We also have the Splunk servers, including search heads, heavy forwarders, and indexers, logging their local OS logs to Splunk as well. All systems run Linux. We use a custom app to collect the local Linux OS logs in /var/log. All hosts running the Universal Forwarder, plus the search heads and heavy forwarders, get the app from the deployment server, so they all have the identical app to collect the Linux OS logs.

Recently we wanted to divide up the indexes the logs are sent to, based on processes. In our custom app on the indexers we created an entry in props and transforms and deployed it. We then used the deployment server and pushed the new sourcetype out to all hosts. All of the logs coming from the UFs worked fine, and the indexers began to divide up the Linux OS logs from them as expected. However, the Splunk search heads' and heavy forwarders' local Linux OS logs continued to go to the old index, even though their sourcetype did change to reflect the new sourcetype we created and deployed via the deployment server.

Question: why does this config work fine for the hosts using the UF but not for the Splunk servers themselves, if they all have the same app installed from the same deployment server and are all logging to the same indexer?

props.conf:

[company_linux_messages_syslog]
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 32
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-newindex = company_syslog_catchall, company_syslog, syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
category = Operating System
description = Format found within the Linux log file /var/log/messages

transforms.conf:

[company_syslog]
DEST_KEY = _MetaData:Index
REGEX = ^[A-Z][a-z]{2}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\s.*?\s*(docker|tkproxy|auditd|dockerd)\[
FORMAT = syslog

[company_syslog_catchall]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = syslog_catchall
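One hedged explanation: search heads and heavy forwarders are full Splunk instances, so they parse their own /var/log data locally and send it to the indexers already cooked; index-time TRANSFORMS in the app on the indexers therefore never run against that data. If that is the cause, the same props/transforms pair would also need to reach the parsing tier on those machines, for example:

```
# Deploy the same stanzas to the SHs/HFs, not only the indexers
# props.conf
[company_linux_messages_syslog]
TRANSFORMS-newindex = company_syslog_catchall, company_syslog, syslog-host
```

UF-sourced data routes correctly because Universal Forwarders do not parse, so for their events the indexers are the first (and only) parsing point where the transforms can fire.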
Hi, I am running a single instance Splunk deployment on Linux and am planning on upgrading a bunch of Apps on my Splunk Enterprise server (there are about 7 that need upgrading ... a mix of Apps and Add-ons) . I was intending to upgrade the Apps using the GUI. My question is whether it is better to restart when prompted by Splunk (potentially after each app is upgraded) or whether it is possible to do all of the upgrades and then do a single restart of the Splunkd service at the end?   Thanks,
Hey there! I used VMware to clone a host. I tried changing server.conf and inputs.conf seven ways from Sunday. The process starts up without problems, but when I go to our local search: rjbandwpoc2 source="/var/log/secure", nothing shows up. Thanks for any pointers.
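One hedged possibility with VMware clones: the clone inherits the original's instance GUID (stored in etc/instance.cfg), which edits to server.conf and inputs.conf do not touch, and a duplicate GUID can make the clone's data get silently dropped. Splunk ships a CLI helper for exactly this; run it on the stopped clone:

```
$SPLUNK_HOME/bin/splunk clone-prep-clear-config
```

It clears the instance GUID and server name so the clone registers as a distinct instance on the next start.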
I'm trying to create a column chart (bar graph) in my Splunk (v8.1.3) dashboard that shows the availability of a given service for various instances, whereby the bars showing percent availability change color depending on their value. For any availability greater than or equal to 0 and less than 90, I want the bar to be red. For any availability greater than or equal to 90 and less than or equal to 100, I want the bar to be green. For any availability outside of those ranges, the default color is fine. Seems simple enough, and this question has been asked in several different forms over the years, but I just can't seem to get mine to work. The availability bars just keep showing in blue, and I've tried both the rangemap and eval methods. See the XML of my chart based on the rangemap solution below. Any help is greatly appreciated.

<chart>
  <title>rhnsd Availability - my-db-*</title>
  <search>
    <query>index=om host="my-db-*" sourcetype=ps rhnsd | stats count by host | addinfo | eval availability=if(count&gt;=(info_max_time-info_min_time)/1800,100,count/floor((info_max_time-info_min_time)/1800)*100) | rangemap field=availability red=0-90 green=90-100 | fields host availability</query>
    <earliest>$query_time.earliest$</earliest>
    <latest>$query_time.latest$</latest>
  </search>
  <option name="charting.axisY.maximumNumber">100</option>
  <option name="charting.axisY.minimumNumber">0</option>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.fieldColors">{"red":0xdc4e41,"green":0x53a051}</option>
  <option name="charting.layout.splitSeries">0</option>
  <option name="charting.legend.placement">none</option>
  <option name="refresh.display">progressbar</option>
</chart>
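In column charts, charting.fieldColors keys on series (field) names, so a single availability series never picks up the red/green mapping; rangemap only adds a range field, it does not rename the series. One sketch that splits availability into two mutually exclusive series the colors can bind to:

```
index=om host="my-db-*" sourcetype=ps rhnsd
| stats count by host
| addinfo
| eval availability=if(count>=(info_max_time-info_min_time)/1800,100,count/floor((info_max_time-info_min_time)/1800)*100)
| eval red=if(availability<90, availability, null())
| eval green=if(availability>=90, availability, null())
| fields host red green
```

With the existing `charting.fieldColors` of `{"red":0xdc4e41,"green":0x53a051}` and stacked mode, each host then shows a single bar in the intended color, since exactly one of the two series is non-null per host.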
I need an alert for when you get the message "Attempting to send email to:<email>" but never get the message "Email sent successfully" within 1-2 minutes. The problem is that if you don't get the second message, a restart will be required before you do. So I need event 1, but no event 2.
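A hedged sketch using transaction with keepevicted, so attempts that never saw the success message within the window survive with closed_txn=0 (the index is a placeholder; if the messages carry a join key such as the recipient address, add it as a by-field on transaction):

```
index=app ("Attempting to send email to:" OR "Email sent successfully")
| transaction startswith="Attempting to send email to:" endswith="Email sent successfully" maxspan=2m keepevicted=true
| where closed_txn=0
| search "Attempting to send email to:"
```

The final search keeps only the dangling attempt events, which is exactly the "event 1 but no event 2" condition to alert on.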
I am trying to figure out the following and would greatly appreciate some help:

I have an alert whose search query looks for a certain event within the last 30 days. If the event of interest occurs, an alert shall be triggered. This is working fine. Now, because I have to look for events in the last 30 days, I do not want the exact same event to trigger another alert. I do, however, want to trigger another alert if the event occurs on, say, a different host. By my understanding, this can be achieved by the following:

- Use trigger type "for each event"
- Suppress for 30 days: events with the field _time

When the event in question has triggered, we navigate to triggered alerts and select "show events". I want to be able to see only the very event that triggered that same, recent alert. I want this because it helps the person investigating the issue to immediately see what asset is affected. Is it possible to do this?
Hello,   Is anyone using Splunk to dashboard social media (Twitter, snapchat)?  If you are how did you do it?  and what are you monitoring?   Thank You,   James
Hello,

My requirement is to display the results of the related Service and KPI name if any of the below tiles turns yellow, red, etc., except green (this we can verify using alert_level).

I created the below lookup table: #sresellerpricing

Below is the query I'm using, but I'm not getting any results:

index=itsi_summary KPI IN ("ServiceHealthScore") alert_level>1 is_entity_in_maintenance=0 serviceid IN ("e46d2d3b-7b5a-40d4-aebf-54aa0b394e25", "23ff8e98-59d0-48d6-a8d2-8a1385d26bd8", "372848b0-380a-4fca-a78c-816747e00cf3")
| eval service_kpi_id=serviceid."-".kpiid
| search NOT service_kpi_id IN (
    [ search index=itsi_summary KPI IN ("ServiceHealthScore") alert_level>1 is_entity_in_maintenance=0 serviceid IN ("e46d2d3b-7b5a-40d4-aebf-54aa0b394e25", "23ff8e98-59d0-48d6-a8d2-8a1385d26bd8", "372848b0-380a-4fca-a78c-816747e00cf3")
    | eval service_kpi_id=serviceid."-".kpiid
    | dedup service_kpi_id
    | return $service_kpi_id ] )
| lookup sresellerpricing key AS kpiid OUTPUT Service
| dedup kpi Service
| table Service kpi
Hi All, I was working on a case where I have 2 fields extracted, "actordisplayName" and "targetUser", in the same raw log. actordisplayName is who initiated the change; targetUser is the user it was changed for.

index=something displayMes="User update password"
| where actordisplayName!=targetUser
| table _time user, displayMes, actordisplayName, targetUser outcome.result

Running this for 30 days.

Requirement: I need to search only for users where actordisplayName and targetUser are not the same. E.g., I want only the results for my admin or someone else who has done a password reset for me; I don't want the results for me resetting the password for my own account. In short, I need results where actordisplayName and targetUser are not the same.
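One hedged side note on that query: `where actordisplayName!=targetUser` silently drops events in which either field is missing, since comparisons against null evaluate to false. If such events should surface too, a null-safe variant might look like:

```
index=something displayMes="User update password"
| where coalesce(actordisplayName, "unknown") != coalesce(targetUser, "unknown")
| table _time user displayMes actordisplayName targetUser outcome.result
```

If dropping the null cases is intended, the original where clause already does exactly what the requirement states.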