All Topics

I have a Splunk query that does some comparisons, with the output below. If any row for a given hostname has "ok", that host should be marked as "OK" (irrespective of which IP addresses it has). Can you help me with the right query, please?

Hostname  IP_Address  match
esx24     1.14.40.1   missing
esx24     1.14.20.1   ok
ctx-01    1.9.2.4     missing
ctx-01    1.2.1.5     missing
ctx-01    1.2.5.26    missing
ctx-01    1.2.1.27    missing
ctx-01    1.1.5.7     ok
ctx-01    1.2.3.1     missing
ctx-01    1.2.6.1     missing
ctx-01    1.2.1.1     missing
w122      1.2.5.15    ok
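One possible approach (an untested sketch; the field names Hostname, IP_Address, and match are taken from the table above): roll the rows up per host with stats and flag any host that has at least one "ok" row:

```
<your base search>
| stats count(eval(lower(match)=="ok")) as ok_count values(IP_Address) as IP_Address by Hostname
| eval host_status=if(ok_count > 0, "OK", "missing")
| table Hostname IP_Address host_status
```

If you need to keep one row per host/IP pair instead, eventstats can attach the rolled-up status without collapsing the rows.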
1] Tried using "until/since" to pull the number of days between expirationDateTime and the system date, based on token name (we have many token names).

expirationDateTime         eventTimestamp             pickupTimestamp
2025-07-26T23:00:03+05:30  2024-11-21T17:06:33+05:30  2024-11-21T17:06:33+05:30

Token name: AppD

Can you suggest a query that returns, in number of days, when the certificate expires?
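One way to sketch this (untested; it assumes expirationDateTime and a token-name field are already extracted, and that timestamps keep the +05:30 offset format shown above): parse the timestamp with strptime and diff it against now():

```
<your base search> token_name="AppD"
| eval expiry_epoch = strptime(expirationDateTime, "%Y-%m-%dT%H:%M:%S%z")
| eval days_to_expiry = floor((expiry_epoch - now()) / 86400)
| table token_name expirationDateTime days_to_expiry
```

If %z does not accept the colon inside the +05:30 offset on your version, strip the colon with replace() before calling strptime.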
I created a scheduled search that reads two input lookup CSV files. It returns zero results when I look at it in "View Recent"/Job Manager, but when I run it by clicking "Run", I get the results I'm looking for. What am I overlooking?
I'm aware that inputs.conf should be removed before installing the TAs that collect the logs on the SHC, but if the inputs are still present in a disabled state I get errors like "Unable to initialize modular input". Hence, I want to understand: do the scripts continue running in the background even when the inputs are disabled (and throw this error), or is it something else I'm not aware of?
Hi Team, can someone guide me on how to fetch a word (highlighted) from the logs below?

AccountMonthendReset - Total number of records reset after monthend:111439411
AccountBalanceMonthendSnapshot - Total number of records in Monthend Cache:111439411
MonthlyCollateralProcessor - compareCollateralStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=0, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=10/31/2024, fileName=SETTLEMENT_MONTHEND_COLLATERAL_CONSUMER_CHARGE, totalAchCurrOutstBalAmt=4.57373200875E9, totalAchBalLastStmtAmt=4.57373200875E9, totalClosingBal=4.57373200875E9, sourceName=null, version=1, associationStats={}] with collateralSum 4.57373200875E9 openingBal 4.53003366393E9 ageBalTot 4.57373200875E9 busDt 10/31/2024
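The highlighting did not survive in the post above, but assuming the target is the numeric values after the colons, here is a hedged rex sketch (the capture-field names reset_count, cache_count, and collateral_sum are invented for illustration):

```
<your base search>
| rex "Total number of records reset after monthend:(?<reset_count>\d+)"
| rex "Total number of records in Monthend Cache:(?<cache_count>\d+)"
| rex "collateralSum\s+(?<collateral_sum>[\d.E]+)"
```

Adjust the patterns to whichever words were actually highlighted.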
I want to create a view like the one above under a dashboard.
Hello, could you tell me how to properly have a dedicated server certificate for a specific tcp-ssl input in inputs.conf (Checkpoint) and another dedicated server certificate for the HF in server.conf, both using different sslPassword settings? Both are from the same secondary root CA. Or should we keep a single dedicated server certificate on the heavy forwarder and only put the dedicated Checkpoint certificate on the appliance? Thanks.
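For what it's worth, a common layout (a sketch only; the stanza and setting names are standard, but the paths and passwords are placeholders, so verify against the inputs.conf/server.conf specs for your version) keeps the input certificate in the [SSL] stanza of inputs.conf and the splunkd certificate in [sslConfig] of server.conf:

```
# inputs.conf on the heavy forwarder
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/checkpoint_cert.pem
sslPassword = <checkpoint_cert_password>

[tcp-ssl:6514]
sourcetype = cp_log

# server.conf on the heavy forwarder
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/hf_cert.pem
sslPassword = <hf_cert_password>
```

Each file obfuscates its own sslPassword on restart, so the two certificates can carry different passwords independently.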
Hello Splunkers!! We have events that contain source and destination fields with complete values, and we want to match these fields against a lookup where the corresponding fields (source and destination) may include wildcard values. The goal is to accurately match the event data with the appropriate lookup rows, ensuring that wildcard patterns in the lookup are properly evaluated during matching. Here is what I have tried so far to match the event field values with the lookup field values, with no luck. Please give me some suggestions to execute this correctly.

| lookup movement_type_ah mark_code as mark_code destination as destination source as source OUTPUTNEW movement_type
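For wildcard values inside the lookup file, the lookup usually has to be backed by a lookup definition whose transforms.conf stanza sets match_type (a sketch; the stanza and field names are taken from the search above, the filename and max_matches are assumptions):

```
# transforms.conf
[movement_type_ah]
filename = movement_type_ah.csv
match_type = WILDCARD(source), WILDCARD(destination)
max_matches = 1
```

Without WILDCARD(...) in match_type, the * characters in the CSV are treated as literal text, which would explain the failed matches.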
Hello all, is there any official document that can guide setting up Splunk on AWS Elastic Beanstalk? Thanks.
I have set up email authentication and SMTP using Amazon SES. The test email was successful. I configured the mail server by entering the SMTP ID and password. I created a simple alert, configured it to trigger in real time, and set it to send an email. However, the alert is not being generated and the alert email is not being sent. Is there a way to configure Amazon SES SMTP with Splunk Enterprise's mail server and alert settings to ensure the emails are sent? Thank you!
I am using Splunk Enterprise's DSDL app and can't run any of the examples; I typically end up with this error:

[mlspl.MLTKContainer] [get_endpoint_url] Failed to connect to the container

I am new to Splunk and am using the Golden Image CPU (5.1.2) MLTK container with a Docker backend on my local machine to try out the examples in this app. For example, when trying to run the Neural Network Classifier Example, it uses the diabetes_classifier_model but I constantly run into the above error. I am using container mode PROD, but even DEV doesn't work.
For the Splunk Cloud Platform: where is it documented that Splunk Cloud logs cannot be changed?
Hi, I have the code below to produce this table, but does anyone know how to get rid of the part in red (which I have marked with Paint)? It's just taking up too much real estate on the screen; it is like an extra line of black that I don't want. Thanks so much in advance.

<panel>
  <title>Process Resources</title>
  <html depends="$alwaysHideCSSStyle$">
    <style>
      #tableWithHiddenHeader6 th[data-sort-key=label] { width: 40% !important; text-align: left; }
      #tableWithHiddenHeader6 th[data-sort-key=value] { width: 20% !important; text-align: left; }
      #tableWithHiddenHeader6 th[data-sort-key=threshold] { width: 20% !important; text-align: left; }
      #tableWithHiddenHeader6 th[data-sort-key=limit] { width: 20% !important; text-align: left; }
      #tableWithHiddenHeader6 td { text-align: left; }
      #tableWithHiddenHeader7 thead { visibility: hidden; height: min-content; }
      #tableWithHiddenHeader7 th[data-sort-key=label] { width: 40% !important; }
      #tableWithHiddenHeader7 th[data-sort-key=value] { width: 20% !important; }
      #tableWithHiddenHeader7 th[data-sort-key=threshold] { width: 20% !important; }
      #tableWithHiddenHeader7 th[data-sort-key=limit] { width: 20% !important; }
      #tableWithHiddenHeader7 td { text-align: left; }
      #tableWithHiddenHeader8 thead { visibility: hidden; height: min-content; }
      #tableWithHiddenHeader8 th[data-sort-key=label] { width: 40% !important; }
      #tableWithHiddenHeader8 th[data-sort-key=value] { width: 20% !important; }
      #tableWithHiddenHeader8 th[data-sort-key=threshold] { width: 20% !important; }
      #tableWithHiddenHeader8 th[data-sort-key=limit] { width: 20% !important; }
      #tableWithHiddenHeader8 td { text-align: left; }
      #tableWithHiddenHeader9 thead { visibility: hidden; height: min-content; }
      #tableWithHiddenHeader9 th[data-sort-key=label] { width: 40% !important; }
      #tableWithHiddenHeader9 th[data-sort-key=value] { width: 20% !important; }
      #tableWithHiddenHeader9 th[data-sort-key=threshold] { width: 20% !important; }
      #tableWithHiddenHeader9 th[data-sort-key=limit] { width: 20% !important; }
      #tableWithHiddenHeader9 td { text-align: left; }
      #tableWithHiddenHeader10 thead { visibility: hidden; height: min-content; }
      #tableWithHiddenHeader10 th[data-sort-key=label] { width: 40% !important; }
      #tableWithHiddenHeader10 th[data-sort-key=value] { width: 20% !important; }
      #tableWithHiddenHeader10 th[data-sort-key=threshold] { width: 20% !important; }
      #tableWithHiddenHeader10 th[data-sort-key=limit] { width: 20% !important; }
      #tableWithHiddenHeader10 td { text-align: left; }
    </style>
  </html>
  <table id="tableWithHiddenHeader6">
    <search id="twenty_one">
      <done>
        <set token="tokStatus20">$result.threshold$</set>
        <set token="tokStatus30">$result.limit$</set>
      </done>
      <query>| mstats max("mx.database.space.usage") as value WHERE "index"="murex_metrics" AND "mx.env"="*" AND process.pid="*" span=10s BY degraded.threshold down.threshold process.pid
| rename degraded.threshold as T_CpuPerc
| rename down.threshold as limit
| sort - _time
| head 1
| eval value = round(value,1)
| eval label="Database space"
| eval threshold=T_CpuPerc
| eval limit=limit
| table label value threshold limit
| appendpipe [ stats count | eval "label"="No results Found" | where count=0 | table "label"]</query>
      <earliest>$time_token.earliest$</earliest>
      <latest>$time_token.latest$</latest>
      <sampleRatio>1</sampleRatio>
      <refresh>5s</refresh>
      <refreshType>delay</refreshType>
    </search>
    <option name="count">20</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
    <format type="color" field="value">
      <colorPalette type="expression">if(value &gt; $tokStatus30$, "$TOKEN_RED$", if(value &gt; $tokStatus20$, "$TOKEN_YELLOW$", "$TOKEN_GREEN$"))</colorPalette>
    </format>
  </table>

...etc., for each row of the table.
Hi all, I have 300+ Splunk alerts pointing to webhook endpoint "A", but I have a migration planned for the webhook soon. All 300+ alerts need to be edited so the webhook endpoint points to "B". I was wondering if there is an easy way of bulk-editing all the alerts rather than doing it individually for each alert. Thanks.
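One commonly used route is to script the change against the saved/searches REST endpoint. A hedged, untested sketch follows: the host, credentials, app namespace, and alert names are placeholders, and the action.webhook.param.url key assumes the stock webhook alert action.

```
# Run from a host that can reach the search head's management port.
for name in "Alert One" "Alert Two"; do
  encoded=$(echo "$name" | sed 's/ /%20/g')
  curl -k -u admin:changeme \
    "https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches/$encoded" \
    -d "action.webhook.param.url=https://b.example.com/hook"
done
```

You can first GET the same endpoint with a `search=` filter on the webhook URL to build the list of alert names automatically instead of hard-coding it.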
Hi, I have a problem with this panel. Tokens $Y$ and $N$ lose their values when changing between the radio input options (Yes/No). The $trobots$ token shows the literal value, not the token value (it literally shows $N$ and $N$).

<input type="dropdown" token="tintervalo" searchWhenChanged="true">
  <label>Intervalo</label>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimasemana">Última semana completa</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimomes">Último mes completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimotrimestre">Último trimestre completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimoaño">Último año completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusomescurso">Mes en curso</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoañoencurso">Año en curso</choice>
  <choice value="7">Otros</choice>
  <change>
    <condition value="7">
      <set token="show_timepicker">true</set>
      <unset token="show_timepicker2"></unset>
      if($ttime.earliest$=="",<set token="ttime.earliest">-4h@m</set>)
      if($ttime.latest$=="",<set token="ttime.latest">now</set>)
      if($trobots$=="",<set token="trobots">`filter_robots` `filter_robots_ip`</set>)
      <set token="Y">| eval delete=delete</set>
      <set token="N">`filter_robots` `filter_robots_ip`</set>
    </condition>
    <condition>
      <unset token="show_timepicker"></unset>
      <set token="show_timepicker2"></set>
      if($trobots$=="",<set token="trobots">SinBots"</set>)
      <set token="Y">conBots"</set>
      <set token="N">sinBots"</set>
    </condition>

<input type="radio" token="trobots" depends="$show_timepicker$" id="inputRadioRI" searchWhenChanged="true">
  <label>Robots</label>
  <choice value="$Y$">Yes</choice>
  <choice value="$N$">No</choice>
  <initialValue>$N$</initialValue>
</input>
I am currently preventing Chrome and Edge processes from being indexed with the following regex:

blacklist7 = EventCode="4673" Message="Audit\sFailure[\W\w]+Process\sName:[^\n]+(chrome.exe|msedge.exe)"

This works on the majority of the forwarders. However, some stragglers still send these events in even though they have the updated inputs deployed on their systems. My workaround is to nullQueue the events in transforms.conf in the /etc/system/local directory; I believe this should be working at the forwarder level. Any ideas as to why this is happening? For perspective: I have 400 Windows machines, and only 5 of the systems still send in the events even after a deployment server reload.
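For reference, a null-queue sketch of the workaround described above (the sourcetype stanza here is a guess; match it to whatever your Windows events actually arrive as):

```
# props.conf (on the component that parses the data)
[WinEventLog:Security]
TRANSFORMS-null_browser_4673 = null_browser_4673

# transforms.conf
[null_browser_4673]
REGEX = 4673[\W\w]+Process\sName:[^\n]+(chrome\.exe|msedge\.exe)
DEST_KEY = queue
FORMAT = nullQueue
```

Note that a universal forwarder does not run parsing-time transforms, so if the stragglers are UFs, this filter only takes effect where the data is first parsed (a heavy forwarder or the indexers), not on the UF itself.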
Hi, I have a simple search which uses a lookup definition based on a lookup file. This lookup is large. The search had been using this lookup perfectly fine, outputting correct results. Since upgrading to the Splunk Enterprise version below, the output is not happening like it used to: matched output is significantly reduced, resulting in NULL values for many fields, even though the lookup is complete and has no issues. I am wondering what changed in this version to cause this, and how to remediate it?

Splunk Enterprise Version: 9.3.1 Build: 0b8d769cb912

index=A sourcetype=B | stats count by XYZ | lookup ABC XYZ as XYZ OUTPUT FieldA, FieldB
We have been given new Connection Strings to enter into our TA-MS-AAD inputs, running on Splunk Cloud's IDM host and pulling from a client's Event Hub. The feeds were down for several days before we were given the strings. The IDM is now connecting to the Event Hub again, but no data is flowing; the IDM's logs say: "The supplied sequence number '5529741' is invalid. The last sequence number in the system is '4121'". Is there anything we can do about this?
I have deployed a SH cluster with two SHs, and everything has been working fine until now. Now I have added a new member to the cluster; all configurations are replicated, but the apps are not.

Q1: Will the apps be replicated on the new member automatically, or should I run the deploy-bundle command on the deployer?

Q2: When I run the command from the deployer, I get a network layer error and the Splunk service stops automatically.
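On Q1: apps staged on the deployer are not pulled by a new member automatically; runtime knowledge objects replicate within the cluster, but apps have to be pushed from the deployer. A hedged sketch of the push (the target URI and credentials are placeholders):

```
# Run on the deployer
splunk apply shcluster-bundle -target https://<any_shc_member>:8089 -auth admin:<password>
```

Note that apply shcluster-bundle triggers a rolling restart of the members by default, which may be related to the service stopping you describe in Q2; the deployer's splunkd.log should show the underlying network error.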
Can anyone help me with an issue I get from time to time on my dashboard built using Splunk Dashboard Studio? For some reason this error occurs only for maps. Here's the query:

| tstats count from datamodel=Cisco_Security.ASA_Dataset
    where index IN (add_on_builder_index, adoption_metrics, audit_summary, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_se, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, endpoint_summary, fw_syslog, history, ioc, main, mcd, mcd_syslog, new_index_for_endpoint, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois)
    ASA_Dataset.event_type_filter_options IN (*) ASA_Dataset.severity_level_filter_options IN (*) ASA_Dataset.src_ip IN (*) ASA_Dataset.direction="outbound"
    groupby ASA_Dataset.dest
| iplocation ASA_Dataset.dest
| rename Country as "featureId"
| stats count by featureId
| geom geo_countries featureIdField=featureId
| where isnotnull(geom)

After reloading the page, the error disappears.