All Topics



Hello, I'm using Splunk Enterprise 8.2.6 and ITSI version 4.11.5. I am defining entity types and I need to set CPU usage thresholds as follows: >=95 warning, >=98 critical. The interface only lets me set "greater than" or "less than". I figured that if I set the values to 97.9 and 94.9 I could configure the thresholds correctly. The only problem is that this is not permitted, even though the metrics themselves are displayed with decimals! This is a very problematic oversight, because it means I can't set and monitor any threshold 100% reliably. How can I get around this issue? Thanks! Andrew
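For reference, the intended semantics can be written out in SPL. A minimal run-anywhere sketch (the field name cpu_load_percent and the sample value are illustrative, not taken from ITSI's configuration):

| makeresults
| eval cpu_load_percent=97.95
| eval severity=case(cpu_load_percent>=98, "critical", cpu_load_percent>=95, "warning", true(), "normal")
| table cpu_load_percent severity

A value such as 97.95 is exactly the case where ">= 98" and "> 97.9" disagree, which is why an integer-only threshold input is a problem when the metric is displayed with decimals.
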
Remove values from one multivalue field that are present in another multivalue field. Looking for something like:

| eval dest=mvfilter(if(dest IN email_sender, null(), dest))

Here dest contains both the sender and receiver of the email, hence I'm trying to exclude the sender from it. (FYI, the sender is also a multivalue field; that's because I've used stats before it.)
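mvfilter() can only reference a single multivalue field, so a sketch along these lines may be closer to what's needed (untested; assumes Splunk 8.0+ for mvmap):

| eval dest=mvmap(dest, if(isnull(mvfind(email_sender, "^".dest."$")), dest, null()))

mvfind() returns the index of the first email_sender value matching the regex, or null if none matches, so each dest value is kept only when it does not appear in email_sender. Note that dots in email addresses act as regex wildcards here, which is usually harmless for this comparison, but exact matching would require escaping them first.
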
Hi, I want to filter Windows event logs on a universal forwarder with a blacklist config, but it doesn't work as described in the documentation. Why is this not working?

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml = false
index = wineventlog
blacklist7 = EventCode="4624|4625|4634" User="\w+\$"

I just want to filter out usernames ending with $. Happy splunking.
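As a sanity check before blaming the blacklist syntax, the regex can be tested against already-indexed data (a hedged sketch, assuming the events land in the wineventlog index as configured above):

index=wineventlog EventCode IN (4624, 4625, 4634)
| regex User="\w+\$"
| stats count by User

If this returns the machine accounts (usernames ending in $) you expect to drop, the regex itself is sound, and the next thing to check is whether the stanza actually reaches the forwarder, e.g. with splunk btool inputs list --debug on the UF.
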
Hello, what is the procedure for migrating a deployer to a new server? We are running on Linux, version 8.1.6 on both the deployer and the search head cluster, and we cannot reuse IP addresses or hostnames. I have read https://docs.splunk.com/Documentation/Splunk/8.1.6/DistSearch/BackuprestoreSHC but it's not crystal clear, since we are only migrating the deployer and the search head cluster will remain the same. Thanks!
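The deployer holds no runtime state beyond its configuration bundle, so the usual outline (a hedged sketch, not official guidance; the hostname is illustrative) is: install the same Splunk version on the new host, copy $SPLUNK_HOME/etc/shcluster/ and the [shclustering] pass4SymmKey from the old deployer, then point each search head member at the new deployer in server.conf:

[shclustering]
conf_deploy_fetch_url = https://new-deployer.example.com:8089

followed by a rolling restart of the members and a test splunk apply shcluster-bundle from the new deployer.
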
Hi All, our search heads are on Splunk Cloud version 8.2.2203.2, and there is a requirement from our application team to use the Stream Processor Service that is part of the Splunk offering (ref: https://docs.splunk.com/Documentation/StreamProcessor/standard/Admin/About) for WinEventLog and IIS logs. Is it something specific we need to purchase as a license, or will it come with my Splunk Cloud subscription? When I checked the document, it says to "Get access to a tenant and the Stream Processor Service": https://docs.splunk.com/Documentation/StreamProcessor/standard/Admin/About#:~:text=Log%20in%20with%20your%20splunk,for%20the%20Stream%20Processor%20Service. So kindly let me know whom to contact for the Stream Processor Service. The document also mentions configuring templates and other things, so kindly let me know how to proceed further.
Hey all, I have a summary table that shows these values, and some Process values appear more than once.

Process  Error  Success  Total
A        5      5        10
B        6      9        15
A        7      2        9
C        3      8        11
C        1      3        4
B        5      5        10

I want to combine the rows with a common Process value and add the numeric columns together. I am hoping for a result like this in my summary table:

Process  Error  Success  Total
A        12     7        19
B        11     14       25
C        4      11       15

Any help would be much appreciated. Thanks!
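Assuming Error, Success, and Total are already numeric fields, a minimal sketch appended to the existing search:

| stats sum(Error) as Error sum(Success) as Success sum(Total) as Total by Process

stats collapses the duplicate Process rows and sums each column per Process value.
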
Hi Splunkers, I have an issue with the use of a data model, the eval command, and sourcetype as a filter. Let me explain. The customer asked us to modify the field action on the Email data model: if the sourcetype is a particular one, let's say xxx, action must be set to another field called final_action; otherwise, the normal behavior is fine. Now, in the Email data model the field action is a calculated one with the following eval expression:

if(isnull(action) OR action="","unknown",action)

So I thought to simply turn it into a case expression, adding the check on the sourcetype; based on this, I tested the following search:

| from datamodel:"Email"
| eval action = case(isnull(action) OR action="","unknown", sourcetype="xxx", final_action, 1=1, action)
| stats count values(action) as action by sourcetype

But it does not work; I mean, the field action is correctly filled for all the other sourcetypes we have, but the action output field for sourcetype xxx is empty. My first doubt was: does the problem exist because I used different fields, not equal to each other, in the case function? So I tried this search:

| from datamodel:"Email"
| eval action = if(isnull(action) OR action="","unknown", action)
| eval action = if(sourcetype="xxx", final_action, action)
| stats count values(action) as action by sourcetype

But the action output for sourcetype xxx is still empty. I'm sure the field is correct and populated, because if I use a search without the data model, comparing two different sourcetypes we have for mails, the search works fine. For example, if I use:

index=* sourcetype IN (xxx, yyy)
| eval action=if(sourcetype="xxx", final_action, action)
| stats count values(action) as action by sourcetype

the output is the desired one: the action field for yyy is the already existing one, while for xxx it is overwritten with final_action values.
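One hypothesis worth checking (an assumption, since the post doesn't show the data model's field list): | from datamodel only returns fields defined in the data model, so if final_action is not an extracted or calculated field in Email, it will be null at that point even though it exists in the raw events. A quick test:

| from datamodel:"Email"
| stats count(final_action) as has_final_action by sourcetype

If has_final_action is zero for sourcetype xxx here, final_action needs to be added to the data model (or the action rewrite has to happen in the model's own calculated-field eval) before case() can see it.
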
Hi, we are looking for some help on a GR (geo-redundant) Splunk setup. Does anyone already have such an architecture implemented in their own or a customer's environment? Did you follow any reference architectures published by Splunk? I'd appreciate it if you could share some ideas. Thanks in advance.
Hi, I have a bar chart where I need each bar to represent a different category (each with a different colour), similar to how each section of my pie charts represents a different section. Here is the XML for my current bar chart:

<chart>
  <search>
    <query>| inputlookup Migration-Status-McAfee
| fillnull value=null
| eval "Completion Status"=if('Completion Status'=""," ",'Completion Status')
| chart count over "Completion Status"</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">default</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.legend.placement">top</option>
  <option name="charting.seriesColors">[0x008000,0xffff00,0xff0000]</option>
</chart>

Can you please help? Thanks so much!
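A commonly suggested workaround (untested here; the helper field "category" and the status names are illustrative) is to split the single count series into one series per category, so each gets its own colour, and then pin colours by series name with charting.fieldColors:

| inputlookup Migration-Status-McAfee
| eval category='Completion Status'
| chart count over "Completion Status" by category

<option name="charting.fieldColors">{"Complete": 0x008000, "In Progress": 0xffff00, "Not Started": 0xff0000}</option>

With one series per category, the colour options apply per bar rather than colouring all bars as a single series.
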
Hi Team, we have a requirement to use the "Splunk Secure Gateway" app on our ES search head, and our search head is in Splunk Cloud. The Splunk Secure Gateway version is 3.0.9 and the Splunk Cloud version is 8.2.2203.2. We have already set up authentication to the search head via SAML (Azure), we have created a few groups (ess_admin, ess_analyst, ess_user, etc.), and the users are logging into the SH via SAML. When I navigated to the Splunk Secure Gateway app on the search head, it showed the message "SAML needs to be set up for Connected Experiences before devices can be registered", i.e. to configure SAML. When I clicked Configure SAML it navigated to the next page, and when I clicked "Connect to a SAML IdP" (marked as Needs Action) and then Take Action under the Okta or Azure option, it navigated to the SAML Groups page. After that I am not sure what I need to do; moreover, when I tried to create an authentication token I got the error below: "Token creation failed because: Cannot use tokens for SAML user xxx because neither attribute query requests (AQR) nor scripted auth are supported." So kindly help me with how to use the Splunk Secure Gateway app on our Splunk Cloud search head.
I've noticed an issue with the documentation and configuration for DA-ITSI-OS: https://docs.splunk.com/Documentation/ITSI/4.13.1/IModules/OSmoduleconfiguration

Firstly, the documentation suggests that if I am using Splunk_TA_nix, I should enable metrics inputs with the following:

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
source = vmstat
# index = os
disabled = 0

[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
source = iostat
# index = os
disabled = 0

[script://./bin/nfsiostat.sh]
interval = 60
sourcetype = nfsiostat
source = nfsiostat
# index = os
disabled = 0

[script://./bin/ps.sh]
interval = 30
sourcetype = ps
source = ps
# index = os
disabled = 0

[script://./bin/bandwidth.sh]
interval = 60
sourcetype = bandwidth
source = bandwidth
# index = os
disabled = 0

[script://./bin/df.sh]
interval = 300
sourcetype = df
source = df
# index = os
disabled = 0

[script://./bin/cpu.sh]
sourcetype = cpu
source = cpu
interval = 30
# index = os
disabled = 0

[script://./bin/hardware.sh]
sourcetype = hardware
source = hardware
interval = 36000
# index = os
disabled = 0

[script://./bin/version.sh]
disabled = false
# index = os
interval = 86400
source = Unix:Version
sourcetype = Unix:Version

The problem is that these are the event-based inputs, while everything else in the module is set up for metrics! In the actual Splunk_TA_nix, the metrics versions of those scripts have different stanzas, such as cpu_metric, df_metric, interfaces_metric, iostat_metric, ps_metric, vmstat_metric. If I simply change the sourcetype, it breaks the input, so by default all those metrics-based scripts output the metric name with the _metric suffix. Unfortunately, ALL the ITSI OS module searches are looking for the unsuffixed metric names, e.g. cpu, ps, vmstat! If I alter the searches to look for the suffixed metric names, I don't get the OS Host Information panel appearing on the entity within the deep dive or entity view. So I don't know how, under the configured searches, any of this will work unless heavily modified, or why the documentation points to event-log collection scripts when the module requires metrics indexes, given its use of mstats to search. What am I missing here?
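For contrast, the metrics inputs being referred to look roughly like this in Splunk_TA_nix (a hedged sketch based on the stanza names listed above; the script path and the index name are assumptions, and the index must be a metrics index):

[script://./bin/cpu_metric.sh]
interval = 30
sourcetype = cpu_metric
source = cpu_metric
index = os_metrics
disabled = 0

which is exactly what produces the _metric-suffixed names that the module's mstats searches then fail to match.
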
Based on the last row, which is "Average": check the values of avg_cpu_utilization and avg_mem_usage, and wherever the difference from the average is more than 3, change that cell's colour or mark it in bold.

cluster_name  hypervisor_name  avg_cpu_utilization  avg_mem_usage  max_cpu_readiness  max_cpu_utilization  max_mem_usage
Cluster       Host1            8.2                  29.62          0.18               17.65                29.63
Cluster       Host2            5.5                  26.41          0.08               14.31                26.42
Cluster       Host3            1.7                  30.51          0.01               3.48                 30.52
Average                        3.98                 29.61          0.07               9.39                 29.62

For example, for the avg_cpu_utilization field the average is 3.98, so it should be compared against all the values in that column (8.2, 5.5, 1.7), and wherever the difference from the average is more than 3, the value should be marked in bold. In this case, comparing 3.98 with the other three values, Host1's 8.2 should be marked in bold or have its colour changed. The output should be as below, with Host1's avg_cpu_utilization value (8.2) highlighted:

cluster_name  hypervisor_name  avg_cpu_utilization  avg_mem_usage  max_cpu_readiness  max_cpu_utilization  max_mem_usage
Cluster       Host1            8.2                  29.62          0.18               17.65                29.63
Cluster       Host2            5.5                  26.41          0.08               14.31                26.42
Cluster       Host3            1.7                  30.51          0.01               3.48                 30.52
Average                        3.98                 29.61          0.07               9.39                 29.62
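The highlight condition itself can be computed in SPL (a sketch; it assumes the "Average" row is the last row, as stated, and the baseline/flag field names are illustrative). Appended to the search that builds the table:

| eventstats last(avg_cpu_utilization) as cpu_baseline last(avg_mem_usage) as mem_baseline
| eval cpu_flag=if(cluster_name!="Average" AND abs(avg_cpu_utilization-cpu_baseline)>3, "highlight", "normal")
| eval mem_flag=if(cluster_name!="Average" AND abs(avg_mem_usage-mem_baseline)>3, "highlight", "normal")

With the sample data this flags Host1's 8.2 (|8.2 - 3.98| > 3). Rendering the flag as bold or a colour then has to happen on the dashboard side, e.g. a <format> element with a colorPalette keyed on the flag columns, or a custom table cell renderer.
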
I have a syslog-ng configuration that started duplicating events after the Linux box rebooted. Is there any way to avoid it? There are 2 heavy forwarders defined for the same load balancer, and only one of them is duplicating the events in the syslog files it creates.

[root@ilissplfwd07 syslog-ng]# cat syslog-ng.conf
@version:3.5
@include "scl.conf"
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
# Note: it also sources additional configuration files (*.conf)
# located in /etc/syslog-ng/conf.d/
options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    chain_hostnames (off);
    use_dns (no);
    use_fqdn (no);
    owner("splunk");
    group("splunk");
    dir-owner("splunk");
    dir-group("splunk");
    create_dirs (yes);
    keep_hostname (yes);
};

## add Default 514 udp/tcp & Filtered based don't modify below line
#############################################
# Syslog 514
#source s_syslog { udp(port(514)); tcp(port(514) keep-alive(yes)); };
source s_syslog518 { udp(port(518)); };
source s_syslog1513 { tcp(port(1513) keep-alive(yes)); };
source s_syslog1514 { tcp(port(1514) keep-alive(yes)); };
source s_syslog1515 { tcp(port(1515) keep-alive(yes)); };
source s_syslog1516 { tcp(port(1516) keep-alive(yes)); };

destination d_1513 { file("/splunksyslog/port1513/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1513); destination(d_1513); };

destination d_1514 { file("/splunksyslog/port1514/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1514); destination(d_1514); };

destination d_1515 { file("/splunksyslog/port1515/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1515); destination(d_1515); };

destination d_1516 { file("/splunksyslog/port1516/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1516); destination(d_1516); };

# destination d_catch { file("/splunksyslog/catch/$HOST/$YEAR-$MONTH-$DAY-$HOUR-catch.log"); };
# log { source(s_syslog); destination(d_catch); };

destination d_518 { file("/splunksyslog/port518/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog518); destination(d_518); };

@include "/etc/syslog-ng/conf.d/*.conf"
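One thing worth ruling out (a guess, not a diagnosis): two syslog-ng processes running at once after the reboot, e.g. one started by systemd plus one started by an init script or by hand, which would make every message land twice in the same files. A quick check on the affected forwarder:

ps -ef | grep [s]yslog-ng
systemctl status syslog-ng

The @include of /etc/syslog-ng/conf.d/*.conf is also worth inspecting, since a leftover .conf file there that defines the same sources and destinations would have the same duplicating effect.
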
I want to combine two search results, where I'm only interested in the last x/y events from each subsearch. Something like this:

| multisearch
    [search index="sli-index" | eval testtype="endp-health" | head 3]
    [search index="sli-index" | eval testtype="enp-system" | head 6]

This leads to the following error: ...Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command).... Any idea how this can be achieved?
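head is not a distributable streaming command (it has to see events in order), which is what multisearch is objecting to. A sketch of the usual workaround, using append (which accepts non-streaming subsearches) instead:

search index="sli-index" | eval testtype="endp-health" | head 3
| append [search index="sli-index" | eval testtype="enp-system" | head 6]

The trade-off is that append runs the subsearch after the main search rather than in parallel, and subsearch result limits apply.
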
Hi Team, I need some information on the Victoria experience (if possible, advantages and disadvantages) to send a clear report to my client providing all the details of why we need to upgrade from the Classic experience to the Victoria experience. I have already gone through the Splunk docs on this: https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/Experience But I am not getting enough solid points that I can add to the report and showcase to the client. If anyone has any hands-on experience with Victoria and Classic, please provide me with your inputs. It will help me a lot. Thanks in advance.
I am trying to find the failure rate for individual events. Each event has a result which is classified as a success or failure. For this simple run-anywhere example I would like the output to be:

Event    failed_percent
open     .50
close    .66666
lift     .25

|makeresults|eval Event="open", State="success"
|append[|makeresults|eval Event="open", State="locked"]
|append[|makeresults|eval Event="close", State="blocked"]
|append[|makeresults|eval Event="close", State="blocked"]
|append[|makeresults|eval Event="lift", State="too heavy"]
|append[|makeresults|eval Event="lift", State="success"]
|append[|makeresults|eval Event="lift", State="success"]
| eval Success=mvfilter(match(State,"success"))
| eval Failed=mvfilter(match(State,"locked") OR match(State,"blocked") OR match(State,"too heavy"))
| streamstats count(Success) as success_count, count(Failed) as failed_count
| eval failed_percent=(failed_count)/(success_count+failed_count)
| table Event, success_count, failed_count, failed_percent

This lists each of the 7 events separately, and the counts always add to the running total, not by event. I have tried many different ways to achieve this with no success. I started with the simple search below and ended up with the search above. I am not sure how to do an eval(count) on the items in Result. This is obviously not correct SPL, but I tried | eval failure=sum (|where Result="failed"). Plus it would do nothing to group by Event type.

| eval Result=case(like(State,"success"),"success", like(State,"locked"),"failed", like(State,"blocked"),"failed", like(State,"too slow"),"failed", like(State,"too heavy"),"failed", 1=1,"success")
| stats count by Result

I'm not even sure if this is possible. I could do it with a separate search for each event type, but I want a single table in the end. I also thought of doing a lot of joins with different searches, but that seems crazy. Thank you for your help! Using Splunk 8.1.6.
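A sketch of one way to group by Event in a single pass, using stats with eval conditions, appended to the makeresults pipeline above (with the sample data this yields 0.5 for open, 1.0 for close, and about 0.33 for lift):

| eval Result=if(match(State,"success"),"success","failed")
| stats count(eval(Result="failed")) as failed_count count as total by Event
| eval failed_percent=failed_count/total
| table Event, failed_count, total, failed_percent

stats by Event does the per-event grouping that streamstats (a running count over all rows) cannot.
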
Hi, we've set up our Splunk instances to use SAML for sign-on, but we're having difficulty setting up an automatic inactivity logout. I've configured it to be 5m in both web.conf and server.conf but still don't get an automatic logout. It does seem to log out automatically every 24 hours (not consistently), but when this happens Splunk redirects to the IdP, which then redirects back to Splunk with a new SAML token. This happens without the IdP even asking the user for login details. Any help would be appreciated.
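For reference, the settings usually involved (a hedged sketch; whether these fully control SAML sessions, as opposed to the IdP's own session lifetime, is exactly the open question here):

# web.conf
[settings]
tools.sessions.timeout = 5
ui_inactivity_timeout = 5

# server.conf
[general]
sessionTimeout = 5m

Even with these in place, if the IdP still holds a valid session of its own, the observed behaviour (silent redirect and a fresh SAML token) is expected, so the IdP-side session or inactivity timeout would also need to be shortened.
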
Hi Team, I am ingesting job logs into Splunk, and below is one of the job logs (the job ran on 27th June) which was ingested with the wrong _time value. Job log:

(14.2) 06-27-22 10:31:03 (35312:24804) PRINTFN: 2022.06.25
(14.2) 06-27-22 10:31:10 (35312:24804) JOB: Job <ALERTS_MORNING> is completed successfully.

Although the job ran on 27th June, the _time value in Splunk shows 25th June (presumably derived from the PRINTFN date in the logs); the date_mday field under _time shows 25 instead of 27. Can someone help with how _time is derived (it is the ingested timestamp, but in this case it was calculated wrongly) and how to derive the correct timestamp? Regards, Karthikeyan.SV
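Splunk picks _time at index time by scanning the start of each event for something that looks like a date, and here "2022.06.25" apparently wins over "06-27-22". A props.conf sketch that pins extraction to the leading timestamp (the sourcetype name is illustrative; this goes on the indexer or heavy forwarder doing the parsing):

[job_logs]
TIME_PREFIX = ^\(\d+\.\d+\)\s+
TIME_FORMAT = %m-%d-%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

TIME_PREFIX skips the "(14.2) " prefix, TIME_FORMAT matches "06-27-22 10:31:03", and the lookahead cap stops the scanner before it ever reaches the PRINTFN date.
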