All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

(1) index=blah Product IN (Cuteftp,Filezilla)
(2) | rex field=Image "(?<values_Image>[^\\\\]+$)"
(3) | lookup test.csv Image as values_Image OUTPUT Image
(4) | eval match=if(values_Image == Image, "yes", "no")
| table _time Product Company Description ImageLoaded Image values_Image match

(1) I am searching index=blah where "Product" = Cuteftp or Filezilla.
(2) From my results I am removing everything before the last backslash; the new field is called "values_Image".
(3) I am checking the "Image" column in the lookup file (test.csv) to see if it matches "values_Image" from my Splunk results.
(4) If there is a match, I see "yes" in the match column in Splunk. If there is no match, I see "no".

The problem I have: when match=yes, the Image field in Splunk is populated with the value from the Image field in the lookup file (test.csv). This is good. When match=no, the Image field in Splunk is not populated with the value from the Image field in the lookup file (test.csv). This is my problem.
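For reference, a sketch of one common variant of the same logic: because the event already has an Image field and the lookup can only write its OUTPUT field for rows that actually match, giving the lookup's output a separate name (lookup_Image here is a made-up alias) keeps the original value intact and makes the match test explicit.

index=blah Product IN (Cuteftp,Filezilla)
| rex field=Image "(?<values_Image>[^\\\\]+$)"
| lookup test.csv Image AS values_Image OUTPUT Image AS lookup_Image
| eval match=if(isnotnull(lookup_Image), "yes", "no")
| table _time Product Company Description ImageLoaded Image values_Image lookup_Image match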
We have a large number of hosts logging to Splunk via the Universal Forwarder. We also have the Splunk servers, including search heads, heavy forwarders, and indexers, logging their local OS logs to Splunk as well. All systems run Linux. We use a custom app to collect the local Linux OS logs in /var/log. All hosts running the Universal Forwarder, plus the search heads and the heavy forwarders, get the app from the deployment server, so they all have the identical app to collect the Linux OS logs.

Recently we wanted to divide up the indexes the logs are sent to based on processes. In our custom app on the indexers we created an entry in props and transforms and deployed it. We then used the deployment server and pushed the new sourcetype out to all hosts. All of the logs coming from the UFs worked fine, and the indexers began to divide up the Linux OS logs from them as expected. However, the search heads' and heavy forwarders' local Linux OS logs continued to go to the old index, even though their sourcetype did change to reflect the new sourcetype we created and deployed via the deployment server.

Question: why does this config work fine for the hosts using the UF but not for the Splunk servers themselves, if they all have the same app installed from the same deployment server and are all logging to the same indexer?

props.conf

[company_linux_messages_syslog]
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 32
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-newindex = company_syslog_catchall, company_syslog, syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
category = Operating System
description = Format found within the Linux log file /var/log/messages

transforms.conf

[company_syslog]
DEST_KEY = _MetaData:Index
REGEX = ^[A-Z][a-z]{2}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\s.*?\s*(docker|tkproxy|auditd|dockerd)\[
FORMAT = syslog

[company_syslog_catchall]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = syslog_catchall
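As a side note, a quick sketch of a search that shows where one of those hosts' events are actually landing (index, sourcetype, and which indexer received them); the host value is a placeholder for one of the search heads or heavy forwarders:

index=* host=<search_head_or_hf_hostname> sourcetype=company_linux_messages_syslog earliest=-1h
| stats count BY index, sourcetype, splunk_server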
Hi, I am running a single-instance Splunk deployment on Linux and am planning to upgrade a bunch of apps on my Splunk Enterprise server (there are about 7 that need upgrading ... a mix of Apps and Add-ons). I was intending to upgrade the apps using the GUI. My question is whether it is better to restart when prompted by Splunk (potentially after each app is upgraded), or whether it is possible to do all of the upgrades and then do a single restart of the splunkd service at the end?

Thanks,
Hey there! I used VMware to clone a host. I tried changing server.conf and inputs.conf seven ways from Sunday. The process starts up without problems, but when I go to our local search and run: rjbandwpoc2 source="/var/log/secure", nothing shows up. Thanks for any pointers.
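For context, a minimal sketch of the stanzas usually touched in this situation; the sourcetype and index values are assumptions for illustration, not taken from the post.

# server.conf on the clone - give it its own server name
[general]
serverName = rjbandwpoc2

# inputs.conf on the clone - set the host field and monitor the file
[default]
host = rjbandwpoc2

[monitor:///var/log/secure]
disabled = 0
sourcetype = linux_secure
index = <your_target_index>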
I'm trying to create a column chart (bar graph) in my Splunk (v8.1.3) dashboard that shows the availabilities of a given service for various instances, whereby the bars showing percent availability change color depending on their value. For any availability greater than or equal to 0 and less than 90, I want the bar to be red. For any availability greater than or equal to 90 and less than or equal to 100, I want the bar to be green. For any availability outside of those ranges, the default color is fine. Seems simple enough, and this question has been asked in several different forms several times over the years, but I just can't seem to get mine to work. The availability bars just keep showing in blue, and I've tried with both the rangemap and eval methods. See the XML of my chart based on the rangemap solution below. Any help is greatly appreciated.

<chart>
  <title>rhnsd Availability - my-db-*</title>
  <search>
    <query>index=om host="my-db-*" sourcetype=ps rhnsd
| stats count by host
| addinfo
| eval availability=if(count&gt;=(info_max_time-info_min_time)/1800,100,count/floor((info_max_time-info_min_time)/1800)*100)
| rangemap field=availability red=0-90 green=90-100
| fields host availability</query>
    <earliest>$query_time.earliest$</earliest>
    <latest>$query_time.latest$</latest>
  </search>
  <option name="charting.axisY.maximumNumber">100</option>
  <option name="charting.axisY.minimumNumber">0</option>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.fieldColors">{"red":0xdc4e41,"green":0x53a051}</option>
  <option name="charting.layout.splitSeries">0</option>
  <option name="charting.legend.placement">none</option>
  <option name="refresh.display">progressbar</option>
</chart>
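One commonly suggested workaround, sketched here without having been tested against this exact dashboard: charting.fieldColors colors whole series by name, and this chart only ever has one series (availability), so the red/green names never apply. Splitting the value into one field per color gives the option something to match (remember to escape > as &gt; inside the XML <query>):

index=om host="my-db-*" sourcetype=ps rhnsd
| stats count by host
| addinfo
| eval availability=if(count>=(info_max_time-info_min_time)/1800, 100, count/floor((info_max_time-info_min_time)/1800)*100)
| eval red=if(availability<90, availability, null()), green=if(availability>=90, availability, null())
| fields host red green

With the existing charting.fieldColors and stacked column options kept as they are, each host then draws a single column in either red or green.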
I need an alert for when you get the message "Attempting to send email to:<email>" but never get the message "Email sent successfully" within 1-2 minutes. The problem is that if you don't get the second message, a restart will be required before you do. So I need event 1, but no event 2.
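A sketch of one common pattern for "event 1 arrived but event 2 never followed within N minutes"; the index name is a placeholder and the message strings are copied from the post.

index=<your_index> ("Attempting to send email to:" OR "Email sent successfully")
| transaction maxspan=2m startswith="Attempting to send email to:" endswith="Email sent successfully" keepevicted=true
| where closed_txn=0

keepevicted=true keeps the transactions that never saw the closing event, and closed_txn=0 filters down to exactly those, so the alert can simply fire whenever this search returns results.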
I am trying to figure out the following and would greatly appreciate some help:

I have an alert whose search query looks for a certain event within the last 30 days. If the event of interest occurs, an alert shall be triggered. This is working fine. Now, because I have to look for events in the last 30 days, I do not want the exact same event to trigger another alert. I do, however, want to trigger another alert if the event occurs on, say, a different host. By my understanding, this can be achieved by the following:

- Use trigger type "for each event"
- Suppress for 30 days: events with the field _time

When the event in question has triggered, we navigate to triggered alerts and select "show events". I want to be able to see only the very event that triggered that very same, recent alert. I want this because it helps the person who is investigating the issue to immediately see what asset is affected. Is it possible to do this?
Hello,

Is anyone using Splunk to dashboard social media (Twitter, Snapchat)? If you are, how did you do it, and what are you monitoring?

Thank you,

James
Hello,

My requirement is to display the related Service and KPI name if any of the below tiles turns yellow, red, etc. (anything except green; this we can verify using alert_level).

I created the lookup table below: sresellerpricing

Below is the query I'm using, but I am not getting any results.

index=itsi_summary KPI IN ("ServiceHealthScore") alert_level>1 is_entity_in_maintenance=0 serviceid IN ("e46d2d3b-7b5a-40d4-aebf-54aa0b394e25", "23ff8e98-59d0-48d6-a8d2-8a1385d26bd8", "372848b0-380a-4fca-a78c-816747e00cf3")
| eval service_kpi_id=serviceid."-".kpiid
| search NOT service_kpi_id IN (
    [ search index=itsi_summary KPI IN ("ServiceHealthScore") alert_level>1 is_entity_in_maintenance=0 serviceid IN ("e46d2d3b-7b5a-40d4-aebf-54aa0b394e25", "23ff8e98-59d0-48d6-a8d2-8a1385d26bd8", "372848b0-380a-4fca-a78c-816747e00cf3")
    | eval service_kpi_id=serviceid."-".kpiid
    | dedup service_kpi_id
    | return $service_kpi_id ]
    )
| lookup sresellerpricing key AS kpiid OUTPUT Service
| dedup kpi Service
| table Service kpi
Hi All,

I was working on a case where I have 2 fields extracted, "actordisplayName" and "targetUser", in the same raw log. actordisplayName - who initiated the change; targetUser - the user it was changed for.

index=something displayMes="User update password"
| where actordisplayName != targetUser
| table _time user, displayMes, actordisplayName, targetUser outcome.result

I am running this for 30 days.

Requirement: I need to search only for events where actordisplayName and targetUser are not the same. E.g., I want only the results for my admin/someone who has done a password reset for me; I don't want the results for me resetting the password on my own account. In short, I need results where actordisplayName and targetUser are not the same.
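A small sketch of the same comparison with one optional refinement (index and field names are from the post): normalizing case and whitespace first avoids events being counted as "not the same" purely because of formatting differences.

index=something displayMes="User update password"
| eval actor_norm=lower(trim(actordisplayName)), target_norm=lower(trim(targetUser))
| where actor_norm != target_norm
| table _time user displayMes actordisplayName targetUser outcome.result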
Hi all,

My first post on this community. I am a veteran of another BI tool that starts with a Q, and very keen to learn new tools and play with new toys!

I scanned the community but could not find a relevant answer, so please forgive me if this is not a new subject.

I installed a forwarder on my Pi Zero, but cannot start it. I downloaded the ARM version with:

sudo wget -O splunkforwarder-8.2.5-77015bc7a462-Linux-armv8.tgz "https://download.splunk.com/products/universalforwarder/releases/8.2.5/linux/splunkforwarder-8.2.5-77015bc7a462-Linux-armv8.tgz"

Then untarred it:

sudo tar -xvzf splunkforwarder-8.2.5-77015bc7a462-Linux-armv8.tgz

Then tried to start it:

sudo ./splunk start --accept-license

I just get this weird error message. No idea how to proceed.
Hi Folks,

I have been working on a dashboard that displays results as a timechart grouped by days. I see results are displayed for the dates I have chosen. My requirement here is not to have weekend data on the dashboard, which I have achieved by adding the below to the search query:

| eval date_wday=lower(strftime(_time,"%A"))
| where NOT (date_wday="saturday" OR date_wday="sunday")
| fields - date_wday

But my question is how I can achieve this dynamically. Instead of adding this in the query, I should have an input button in the dashboard which can be used to select 'weekend data needed' or 'not needed', and the results should be populated in the dashboard accordingly. Can someone advise on this? Much appreciated for any suggestions. Thanks.
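One way to sketch this in Simple XML (the token name, labels, and example base search are made up for illustration): a radio input sets a token to either a pass-through filter or the weekend filter from the post, and the panel search simply drops the token into the pipeline.

<input type="radio" token="weekend_filter">
  <label>Weekend data</label>
  <choice value="where true()">Needed</choice>
  <choice value="eval date_wday=lower(strftime(_time,&quot;%A&quot;)) | where NOT (date_wday=&quot;saturday&quot; OR date_wday=&quot;sunday&quot;) | fields - date_wday">Not needed</choice>
  <default>where true()</default>
</input>

<!-- in the panel search -->
<query>index=your_index ... | $weekend_filter$ | timechart span=1d count</query>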
Hello Splunkers!

Is there any guidance on the numbers to use for the Replication Factor and Search Factor in index clustering settings? Any official documentation?

Thank you in advance.
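For reference, a minimal sketch of where those two numbers live on the cluster manager; 3 and 2 are only the commonly quoted example values, not a recommendation for any particular environment.

# server.conf on the cluster manager node
[clustering]
mode = manager
# use mode = master on versions earlier than 8.1
replication_factor = 3
search_factor = 2
# search_factor must be less than or equal to replication_factor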
Hi,

I need help writing a Splunk query to calculate the Linux CPU load average for the last 1, 5, and 15 minutes. I have the Splunk TA nix app and collected the metric vmstat_metric.loadAvg1mi, which I used for the last-1-minute query. But I am not sure how to calculate the load average for the last 5 and 15 minutes. Can anyone help?
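A sketch of one way to chart all three at once, assuming the companion metrics are named vmstat_metric.loadAvg5mi and vmstat_metric.loadAvg15mi (worth confirming against your metrics catalog, since those names are inferred from loadAvg1mi) and using a placeholder metrics index:

| mstats avg(vmstat_metric.loadAvg1mi) AS load_1min
         avg(vmstat_metric.loadAvg5mi) AS load_5min
         avg(vmstat_metric.loadAvg15mi) AS load_15min
  WHERE index=<your_metrics_index> span=5m BY host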
Hello Splunkers!

To my knowledge, the Mongo DB is only for internal use and can only be accessed with Splunk SPL.

Is there any official documentation about this?

Thank you in advance.
We have a dashboard that checks endpoint health and creates a message, "Endpoint XYZ is available". The source is a path to a script: /u01/splunk/etc/apps/<app_name>/bin/ping.sh

Is there a way for me to read the contents of the script from the search bar? Is it possible to overwrite or append to the script from the search bar if I am an app owner? I do not have Splunk server command line access.
Hi Team,

I am getting the below error while trying to post data to my Splunk instance using the URL below. I have installed the certificates in the source system by taking them from the browser (lock symbol). Can you please check and help me understand exactly which certificates need to be installed to post data to the endpoint URL below?

url: https://prd-p-jmw56.splunkcloud.com:8088/services/collector/raw

Error details:

java.net.ConnectException: java.security.cert.CertificateException: No name matching prd-p-jmw56.splunkcloud.com found, cause: java.security.cert.CertificateException: No name matching prd-p-jmw56.splunkcloud.com found

Thanks,
Venkat
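As a diagnostic sketch (the token is a placeholder): "No name matching ... found" indicates hostname verification failing because the certificate presented on port 8088 does not list that hostname, rather than a missing CA certificate, so a quick test that skips verification can confirm whether the endpoint itself responds. It is also worth checking your stack's documented HEC URL, since Splunk Cloud HEC endpoints often use an http-inputs- prefixed hostname rather than the web UI hostname.

# -k skips certificate verification; use only to confirm the endpoint responds
curl -k https://prd-p-jmw56.splunkcloud.com:8088/services/collector/raw \
     -H "Authorization: Splunk <your-hec-token>" \
     -d "HEC connectivity test"
# if HEC complains that the data channel is missing, append ?channel=<any GUID> to the URL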
Hello guys, I would like best practices regarding deploying a new Splunk cluster on V8; could you say whether the following is correct and in a logical order?

1. Install Splunk on all nodes with a non-root user (except if you want a HF), verify ulimits
2. Configure one "manager" server with the monitoring console, license master, deployer & deployment server roles
3. Configure the Master Node (cluster master) on a separate server
4. Configure peers, connect them to the MN
5. Configure search heads, connect them to the MN
6. Configure Universal Forwarders

Thanks.
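For steps 4 and 5, a minimal sketch of the server.conf clustering stanzas involved in connecting peers and search heads to the cluster master (the URIs and shared secret are placeholders; on 8.1 and later the equivalent mode=peer / manager_uri names can be used instead):

# server.conf on each indexer (peer)
[clustering]
mode = slave
master_uri = https://<cluster-master>:8089
pass4SymmKey = <your-cluster-secret>

[replication_port://9887]

# server.conf on each search head
[clustering]
mode = searchhead
master_uri = https://<cluster-master>:8089
pass4SymmKey = <your-cluster-secret>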
Hi everybody,

I need to upgrade Splunk Enterprise from 7.3.X to 8.1.0 and then to 8.2.5 (Windows). The architecture includes:

- 1 cluster master
- 1 search head
- 2 indexers (cluster)
- 1 deployment server
- 1 heavy forwarder
- n universal forwarders

Looking at the documentation, these are the steps to follow: download the MSI file to the host, then double-click the MSI file. The installer runs and attempts to detect the existing version of Splunk Enterprise installed on the machine. When it locates the prior installation, it displays a panel that asks you to accept the licensing agreement. Accept the license agreement, and the installer then installs the updated Splunk Enterprise. This method of upgrade retains all parameters from the existing installation. The installer restarts Splunk Enterprise services when the upgrade is complete, and places a log of the changes made to configuration files during the upgrade in %TEMP%.

Shouldn't I stop the Splunk service first? Do I only need to double-click the installer and follow the wizard on each host? That's it? Is there something that I'm missing?

About Splunk apps and add-ons: I need to update some of them; should I do it before or after the Splunk upgrade? Example: Add-on for VMware ESXi Logs is now 3.4.2 and needs to be upgraded to 4.0.3 (which doesn't support Splunk 7.X). I think I should upgrade Splunk first, then add-ons and apps, correct?

Thanks in advance for any help.
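For reference, a sketch of the unattended equivalent of the wizard upgrade on one Windows host (the MSI file name is a placeholder; the installer stops and restarts the services itself, but the service can also be stopped explicitly first):

REM optional: stop Splunk yourself before upgrading
net stop Splunkd

REM unattended upgrade with a verbose log
msiexec /i splunk-8.1.0-<build>-x64-release.msi AGREETOLICENSE=Yes /quiet /L*v splunk_upgrade.log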
I have local access to Splunk on my system and I am seeking a means of accessing the API via a C# application. I noticed there was some documentation but no clear direction on data definitions or additional routes to accessing the data. Could I get some help with this?
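For orientation, a sketch of the underlying REST calls that any C# HttpClient would end up making (hostname, credentials, and the example search are placeholders; 8089 is the default management port):

# 1. Authenticate and capture the session key
curl -k https://localhost:8089/services/auth/login -d username=admin -d password=<password>

# 2. Create a search job; the response contains a search id (sid)
curl -k https://localhost:8089/services/search/jobs \
     -H "Authorization: Splunk <sessionKey>" \
     -d search="search index=_internal | head 5" -d output_mode=json

# 3. Retrieve the results for that sid
curl -k "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json" \
     -H "Authorization: Splunk <sessionKey>"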