All Topics

Hello, could you tell me how to properly configure a dedicated server certificate for a specific tcp-ssl input in inputs.conf (Check Point) and another dedicated server certificate for the heavy forwarder itself in server.conf, each using a different sslPassword setting? Both certificates come from the same secondary root CA. Or should we keep a single dedicated server certificate on the heavy forwarder and only put the dedicated Check Point certificate on the appliance? Thanks.
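A minimal sketch of what the two configurations might look like on the heavy forwarder, assuming Splunk Enterprise 9.x and placeholder ports, paths, and passwords (note that the [SSL] stanza in inputs.conf covers the TCP-SSL inputs on that instance, while [sslConfig] in server.conf covers splunkd itself):

    # inputs.conf -- certificate presented to the Check Point appliance
    [tcp-ssl:6514]
    sourcetype = cp_log

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/checkpoint_input_cert.pem
    sslPassword = <password protecting the Check Point cert's private key>

    # server.conf -- certificate used by splunkd on the heavy forwarder
    [sslConfig]
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/hf_server_cert.pem
    sslPassword = <password protecting the HF cert's private key>

The two sslPassword values can differ; Splunk encrypts each value in place in its own file on restart.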
Hello Splunkers!! We have events that contain source and destination fields with complete values, and we want to match these fields against a lookup where the corresponding fields (source and destination) may include wildcard values. The goal is to accurately match the event data with the appropriate lookup rows, ensuring that wildcard patterns in the lookup are properly evaluated during the matching process. The values are to be matched against the lookup below. Here is what I have tried so far to match the event field values with the lookup field values, but with no luck. Please give me some suggestions on how to do this correctly. | lookup movement_type_ah mark_code as mark_code destination as destination source as source OUTPUTNEW movement_type
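A minimal sketch of the usual approach, assuming movement_type_ah is a CSV-based lookup and that source and destination are the columns holding the wildcard patterns: declare the wildcard match types on the lookup definition (transforms.conf) rather than in the search itself.

    # transforms.conf -- lookup definition
    [movement_type_ah]
    filename = movement_type_ah.csv
    match_type = WILDCARD(source), WILDCARD(destination)
    max_matches = 1

The lookup invocation can then stay as in the question:

    | lookup movement_type_ah mark_code AS mark_code, source AS source, destination AS destination OUTPUTNEW movement_type

If the definition is managed through the UI instead, the same match_type string goes into the lookup definition's advanced options.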
Hello All, is there any official document that can guide setting up Splunk on AWS Elastic Beanstalk? Thanks.
I have set up email authentication and SMTP using Amazon SES. The test email was successful. I configured the mail server by entering the SMTP ID and password. I created a simple alert, configured it to trigger in real-time, and set it to send an email. However, the alert is not being generated, and the alert email is not being sent. Is there a way to configure Amazon SES SMTP with Splunk Enterprise's mail server and alert settings to ensure the emails are sent? Thank you!                
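For reference, a minimal sketch of the email settings that back Settings > Server settings > Email settings, assuming an SES SMTP endpoint in us-east-1 with STARTTLS on port 587 (the region, port, and sender address are placeholders, and the sender must be verified in SES):

    # alert_actions.conf -- [email] stanza written by the Email settings page
    [email]
    mailserver = email-smtp.us-east-1.amazonaws.com:587
    use_tls = 1
    auth_username = <SES SMTP user name>
    auth_password = <SES SMTP password>
    from = alerts@yourdomain.com

If the test email works but alert emails never arrive, it is also worth confirming that the alert itself actually triggers (Activity > Triggered Alerts) before suspecting the SMTP path.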
I am using Splunk Enterprise's DSDL app and can't run any of the examples, as I typically end up with this error: [mlspl.MLTKContainer] [get_endpoint_url] Failed to connect to the container. I am new to Splunk and am using the Golden Image CPU (5.1.2) for the MLTK container with the Docker backend on a local machine to try out the examples in this app. For example, when trying to run the Neural Network Classifier Example, it uses the diabetes_classifier_model, but I constantly run into the above error. I am using container mode PROD, but even DEV doesn't work.
For Splunk Cloud Platform: where is it documented that Splunk Cloud logs cannot be changed?
Hi  I have the below code to produce this table - but does anyone know how to get rid of the part in red (I have added this with paint) - it's just taking up too much real estate on the screen. It is like an extra line of black that I don't want. Thanks so much in advance   <panel> <title>Process Resources</title> <html depends="$alwaysHideCSSStyle$"> <style> #tableWithHiddenHeader6 th[data-sort-key=label] { width: 40% !important; text-align: left; } #tableWithHiddenHeader6 th[data-sort-key=value] { width: 20% !important; text-align: left; } #tableWithHiddenHeader6 th[data-sort-key=threshold] { width: 20% !important; text-align: left; } #tableWithHiddenHeader6 th[data-sort-key=limit] { width: 20% !important; text-align: left; } #tableWithHiddenHeader6 td { text-align: left; } #tableWithHiddenHeader7 thead{ visibility: hidden; height: min-content; } #tableWithHiddenHeader7 th[data-sort-key=label] { width: 40% !important; } #tableWithHiddenHeader7 th[data-sort-key=value] { width: 20% !important; } #tableWithHiddenHeader7 th[data-sort-key=threshold] { width: 20% !important; } #tableWithHiddenHeader7 th[data-sort-key=limit] { width: 20% !important; } #tableWithHiddenHeader7 td { text-align: left; } #tableWithHiddenHeader8 thead{ visibility: hidden; height: min-content; } #tableWithHiddenHeader8 th[data-sort-key=label] { width: 40% !important; } #tableWithHiddenHeader8 th[data-sort-key=value] { width: 20% !important; } #tableWithHiddenHeader8 th[data-sort-key=threshold] { width: 20% !important; } #tableWithHiddenHeader8 th[data-sort-key=limit] { width: 20% !important; } #tableWithHiddenHeader8 td { text-align: left; } #tableWithHiddenHeader9 thead{ visibility: hidden; height: min-content; } #tableWithHiddenHeader9 th[data-sort-key=label] { width: 40% !important; } #tableWithHiddenHeader9 th[data-sort-key=value] { width: 20% !important; } #tableWithHiddenHeader9 th[data-sort-key=threshold] { width: 20% !important; } #tableWithHiddenHeader9 th[data-sort-key=limit] { width: 20% !important; } #tableWithHiddenHeader9 td { text-align: left; } #tableWithHiddenHeader10 thead{ visibility: hidden; height: min-content; } #tableWithHiddenHeader10 th[data-sort-key=label] { width: 40% !important; } #tableWithHiddenHeader10 th[data-sort-key=value] { width: 20% !important; } #tableWithHiddenHeader10 th[data-sort-key=threshold] { width: 20% !important; } #tableWithHiddenHeader10 th[data-sort-key=limit] { width: 20% !important; } #tableWithHiddenHeader10 td { text-align: left; } </style> </html> <table id="tableWithHiddenHeader6"> <search id="twenty_one"> <done> <set token="tokStatus20">$result.threshold$</set> <set token="tokStatus30">$result.limit$</set> </done> <query>| mstats max("mx.database.space.usage") as value WHERE "index"="murex_metrics" AND "mx.env"="*" AND process.pid ="*" span=10s BY degraded.threshold down.threshold process.pid | rename degraded.threshold as T_CpuPerc | rename down.threshold as limit | sort - _time | head 1 | eval value = round(value,1) | eval label="Database space" | eval threshold=T_CpuPerc | eval limit=limit | table label value threshold limit | appendpipe [ stats count | eval "label"="No results Found" | where count=0 | table "label"]</query> <earliest>$time_token.earliest$</earliest> <latest>$time_token.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5s</refresh> <refreshType>delay</refreshType> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> 
<option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> <format type="color" field="value"> <colorPalette type="expression">if(value &gt; $tokStatus30$, "$TOKEN_RED$",if(value &gt; $tokStatus20$, "$TOKEN_YELLOW$", "$TOKEN_GREEN$"))</colorPalette> </format> </table> etc,, for each row of the table            
Hi All, I have 300+ Splunk alerts which are pointing to webhook endpoint "A", but I have a migration planned for the webhook soon. All 300+ alerts need to be edited so the webhook endpoint points to "B". I was wondering if there is an easy way of bulk editing all the alerts rather than doing it individually for each alert. Thanks.
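A minimal sketch of one way to approach this, assuming the alerts use Splunk's standard webhook alert action (whose URL lives in the action.webhook.param.url setting of savedsearches.conf) and that the endpoint URL below is a placeholder. First, list the affected alerts from the search bar:

    | rest /servicesNS/-/-/saved/searches
    | search action.webhook="1" action.webhook.param.url="https://endpoint-a.example.com/*"
    | table title, eai:acl.app, eai:acl.owner, action.webhook.param.url

The bulk change itself can then be done either by editing action.webhook.param.url in the relevant savedsearches.conf files (followed by a reload/restart), or by scripting a POST to the saved/searches REST endpoint for each title returned above.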
Hi, I have a problem with this panel. Tokens $Y$ and $N$ lose their values when changing between the radio input options (Yes/No). The $trobots$ token shows the literal value, not the token value (it literally shows $N$ and $N$).

<input type="dropdown" token="tintervalo" searchWhenChanged="true">
  <label>Intervalo</label>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimasemana">Última semana completa</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimomes">Último mes completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimotrimestre">Último trimestre completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimoaño">Último año completo</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusomescurso">Mes en curso</choice>
  <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoañoencurso">Año en curso</choice>
  <choice value="7">Otros</choice>
  <change>
    <condition value="7">
      <set token="show_timepicker">true</set>
      <unset token="show_timepicker2"></unset>
      if($ttime.earliest$=="",<set token="ttime.earliest">-4h@m</set>)
      if($ttime.latest$=="",<set token="ttime.latest">now</set>)
      if($trobots$=="",<set token="trobots">`filter_robots` `filter_robots_ip`</set>)
      <set token="Y">| eval delete=delete</set>
      <set token="N">`filter_robots` `filter_robots_ip`</set>
    </condition>
    <condition>
      <unset token="show_timepicker"></unset>
      <set token="show_timepicker2"></set>
      if($trobots$=="",<set token="trobots">SinBots"</set>)
      <set token="Y">conBots"</set>
      <set token="N">sinBots"</set>
    </condition>

<input type="radio" token="trobots" depends="$show_timepicker$" id="inputRadioRI" searchWhenChanged="true">
  <label>Robots</label>
  <choice value="$Y$">Yes</choice>
  <choice value="$N$">No</choice>
  <initialValue>$N$</initialValue>
</input>
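In case it helps with framing the issue: choice values and initialValue in an input are generally not re-evaluated when another token changes, so a pattern that tends to behave more predictably is to give the radio static values and derive the working token in its own change handler — a sketch reusing the token names and macros from the panel above (the trobots_choice token name is made up for illustration):

    <input type="radio" token="trobots_choice" depends="$show_timepicker$" searchWhenChanged="true">
      <label>Robots</label>
      <choice value="yes">Yes</choice>
      <choice value="no">No</choice>
      <default>no</default>
      <change>
        <condition value="yes">
          <set token="trobots">| eval delete=delete</set>
        </condition>
        <condition value="no">
          <set token="trobots">`filter_robots` `filter_robots_ip`</set>
        </condition>
      </change>
    </input>

Searches would then reference $trobots$ directly instead of $Y$/$N$.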
I am currently denying chrome and edge processes from being indexed with the following regex:

blacklist7 = EventCode="4673" Message="Audit\sFailure[\W\w]+Process\sName:[^\n]+(chrome.exe|msedge.exe)"

This works on the majority of the forwarders. However, some stragglers still send these events in, even though they have the updated inputs deployed on their systems. My workaround is to nullqueue the events in transforms.conf in the /etc/system/local directory. I believe this should be working at the forwarder level. Any ideas as to why this is happening? For perspective, I have 400 Windows machines and only 5 of the systems still send in the events, even after a deployment server reload.
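For reference, a sketch of the nullQueue fallback, assuming classic (non-XML) Windows Security events with the sourcetype WinEventLog:Security — adjust the stanza name if the events arrive as XmlWinEventLog or under a different sourcetype, and note that queue routing only takes effect on the first component that parses the data (heavy forwarder or indexer), not on a universal forwarder:

    # props.conf
    [WinEventLog:Security]
    TRANSFORMS-drop_4673_browsers = drop_4673_browsers

    # transforms.conf
    [drop_4673_browsers]
    REGEX = EventCode=4673[\W\w]+Process\sName:[^\n]+(chrome\.exe|msedge\.exe)
    DEST_KEY = queue
    FORMAT = nullQueue

Since inputs.conf blacklists are evaluated on the universal forwarder itself, it is also worth confirming with btool (splunk btool inputs list --debug) that the five stragglers actually have blacklist7 in effect and have restarted since the deployment.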
Hi, I have a simple search which uses a lookup definition based on a lookup file. This lookup is large. The search has been using this lookup perfectly fine, outputting correct results. Since upgrading to the Splunk Enterprise version below, the output is not happening like it used to: matched output is significantly reduced, resulting in NULL values for many fields, even though the lookup is complete and has no issues. I am wondering what has changed that is causing this in the version below, and how to remediate it? Splunk Enterprise Version: 9.3.1 Build: 0b8d769cb912 index=A sourcetype=B | stats count by XYZ | lookup ABC XYZ as XYZ OUTPUT FieldA, FieldB
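One thing worth checking with large CSV lookups — offered as a possibility, not a confirmed cause — is the in-memory lookup size limit in limits.conf; when a lookup file grows past max_memtable_bytes, Splunk switches to an index-on-disk strategy for it, and behavior around that threshold is a common suspect after upgrades:

    # limits.conf (search head)
    [lookup]
    # default is 26214400 (25 MB); raise it above the size of the CSV, e.g. 100 MB
    max_memtable_bytes = 104857600

Comparing lookup-related WARN/ERROR messages in search.log for the same search before and after the upgrade would help confirm whether this limit (or something else entirely) is involved.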
We have been given new connection strings to enter into our TA-MS-AAD inputs, running on Splunk Cloud's IDM host and pulling from a client's Event Hub. The feeds were down for several days before we were given the strings. The IDM is now connecting to the Event Hub again, but no data is flowing; the IDM's logs say "The supplied sequence number '5529741' is invalid. The last sequence number in the system is '4121'". Is there anything we can do about this?
I have deployed a SH cluster with two SHs, and everything has been working fine until now. Now I have added a new member to the cluster; all configurations are replicated, but the apps are not replicated. Q1: Will the apps be replicated on the new member automatically, or should I run the deploy bundle command on the deployer? Q2: When I run the command from the deployer, I get a network-layer error and the Splunk service stops automatically.
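For context, deployer-managed apps are only distributed when the bundle is pushed; a minimal sketch of the push command run on the deployer (host name and credentials are placeholders, and the target can be any one cluster member):

    splunk apply shcluster-bundle -target https://sh-member-01.example.com:8089 -auth admin:<password>

If that push fails with a network-layer error, checking that port 8089 is reachable from the deployer to the target member, and reviewing splunkd.log on both ends around the time of the failure, would be the usual next steps.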
Can anyone help me with an issue I get from time to time on my dashboard built using Splunk Dashboard Studio? For some reason this error occurs only for maps. Here's the query:

| tstats count from datamodel=Cisco_Security.ASA_Dataset where index IN (add_on_builder_index, adoption_metrics, audit_summary, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_se, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, endpoint_summary, fw_syslog, history, ioc, main, mcd, mcd_syslog, new_index_for_endpoint, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois) ASA_Dataset.event_type_filter_options IN (*) ASA_Dataset.severity_level_filter_options IN (*) ASA_Dataset.src_ip IN (*) ASA_Dataset.direction="outbound" groupby ASA_Dataset.dest
| iplocation ASA_Dataset.dest
| rename Country as "featureId"
| stats count by featureId
| geom geo_countries featureIdField=featureId
| where isnotnull(geom)

After reloading the page, the error disappears.
I am having trouble creating the connection to Splunk Cloud from Power BI. I have downloaded the latest version of the Splunk ODBC driver (3.1.1), configured it with what I think my user and password are (we authenticate via Active Directory with our tenant), and I have access to the access token in the Splunk Cloud console. The error I am getting is: Details: "ODBC: ERROR [HY000] [Splunk][SplunkODBC] (40) Error with HTTP API, error code: Timeout was reached ERROR [HY000] [Splunk][SplunkODBC] (40) Error with HTTP API, error code: Timeout was reached" Not sure how else to try configuring the ODBC connector.
This is my search. It brings back Not Known for every field instead of the correct case name: index=websphere websphere_logEventType=* | stats count(websphere_logEventType) BY websphere_logEventType | eval websphere_logEventType=case(websphere_logEventType=I, "INFO",websphere_logEventType=E, "ERROR", websphere_logEventType=W, "WARNING", websphere_logEventType=D, DEBUG, true(),"Not Known" ) What am I missing that will bring the count and the case that the count is for, instead of always the Not Known case?
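For comparison, a sketch of case() written with the comparison operator and quoted string literals, assuming the raw values really are the single letters I, E, W, and D (in eval, an unquoted bare word like I or DEBUG is treated as a field name rather than a string):

    index=websphere websphere_logEventType=*
    | stats count BY websphere_logEventType
    | eval websphere_logEventType=case(
        websphere_logEventType=="I", "INFO",
        websphere_logEventType=="E", "ERROR",
        websphere_logEventType=="W", "WARNING",
        websphere_logEventType=="D", "DEBUG",
        true(), "Not Known")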
Hello. I cannot find a solution to this one here... I have logs in one Splunk instance. I've exported them to CSV and want to perform a one-time ingest of that CSV into a new on-prem Splunk Enterprise instance. I have the CSV and can import it. However, I can't figure out how to preserve each row/event's original 'host', timestamp, and 'sourcetype' entry. When I do the import, it records the 'host' as the Splunk indexer and the timestamp as the date of the import, which makes sense but is not the desired behavior. Here are sample rows of the CSV:

_time,host,index,source,sourcetype
2024-11-19T11:36:05.000-0500,host1.example.com,test-index,/var/log/messages,syslog
2024-11-19T11:36:05.000-0500,host2.example.com,test-index,/var/log/messages,syslog

I removed the _raw column, but I can include it if necessary. How do I import these events while preserving the event time, host, and sourcetype fields? Is this even possible? I looked around here and can't find anyone with this scenario. Thank you in advance!
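One possible approach — strictly a sketch, assuming the exact column order shown above, no embedded commas, a stripped header line, and that the file is monitored on the indexer (or a heavy forwarder) where index-time transforms apply — is to assign the file a throwaway sourcetype and use index-time overrides for the timestamp, host, and sourcetype:

    # props.conf
    [reimported_csv]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
    TRANSFORMS-reimport_meta = reimport_set_host, reimport_set_sourcetype

    # transforms.conf
    [reimport_set_host]
    SOURCE_KEY = _raw
    REGEX = ^[^,]+,([^,]+),
    DEST_KEY = MetaData:Host
    FORMAT = host::$1

    [reimport_set_sourcetype]
    SOURCE_KEY = _raw
    REGEX = ^(?:[^,]+,){4}([^,\r\n]+)
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::$1

The reimported_csv sourcetype name is made up for illustration, and keeping (and re-ingesting) the original _raw column instead of the metadata-only CSV row would avoid indexing the metadata columns as event text.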
I'm trying to come up with a search query that ignores parameters if left blank, but ignores other parameters if they are filled in. In this case, "-" is the default value for token1 and token2. If token1 and token2 are left at this default, I want to find results based only on token3; but if token1 or token2 is specified, then I want token3 to be disregarded. Here's what I've been trying, but so far it doesn't seem to be working as I'd hoped: if(($token1$ == "-" AND $token2$ =="-"), (search Field3=$token3$), (search Field1="$token1$" OR Field2="$token2$")) Am I on the right track? Something I'm missing?
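One way to express this kind of either/or logic in plain SPL — a sketch that reuses the field and token names from the question, with index=your_index as a placeholder and assuming the field values are strings — is to let the token substitution happen inside a single where clause instead of an if():

    index=your_index
    | where ("$token1$"=="-" AND "$token2$"=="-" AND Field3=="$token3$")
        OR ("$token1$"!="-" AND Field1=="$token1$")
        OR ("$token2$"!="-" AND Field2=="$token2$")

Because the tokens are substituted as quoted literals before the search runs, the clauses for the unused parameters simply evaluate to false. Filtering with where is less efficient than filtering in the base search, so for large data volumes the same decision could instead be pushed into the token defaults themselves.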
Here is what is needed:
logLevel : INFO -> Blue
logLevel : WARNING -> Yellow
logLevel : ERROR -> Red

The code below is not working for me:

<format type="color">
  <colorPalette type="expression">
    if(match(value,"logLevel=INFO"),"#4f34eb",null),
    if(match(value,"logLevel=WARNING"),"#ffff00",null),
    if(match(value,"logLevel=ERROR"),"#53A051",null)
  </colorPalette>
</format>

Is there an option for colors similar to charting?

<option name="charting.chart">line</option>
<!--[Total,Critical,Major,Minor,Notice,Healthy]-->
<option name="charting.seriesColors">[17202A,C0392B,F5B041,F7DC6F,D5DBDB,3DB42A]</option>
<!--[black, red, orange, yellow, grey, green]-->
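For what it's worth, the expression-type colorPalette expects a single expression rather than a comma-separated list of if() calls, so a case() that returns one color is the usual pattern — a sketch assuming the cell value contains the logLevel text and keeping the blue and yellow from the snippet above (the red and the fallback white are placeholders):

    <format type="color" field="logLevel">
      <colorPalette type="expression">case(match(value,"INFO"), "#4f34eb", match(value,"WARNING"), "#ffff00", match(value,"ERROR"), "#DC4E41", true(), "#FFFFFF")</colorPalette>
    </format>

The field attribute names the column the formatting applies to, so it should match the actual column name in the table.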
I have an employee who keeps getting locked out. I wanted to know how to put a search or script in place to find out which device the lockouts are coming from.
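Assuming Windows Security event logs are already being indexed, a sketch of a search for this — the index, sourcetype, and exact field names are assumptions that may differ in your environment — is to look at account-lockout events (EventCode 4740), where the Caller Computer Name field identifies the device the lockout originated from:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740 user="<locked_out_username>"
    | table _time, user, Caller_Computer_Name, ComputerName
    | sort - _time

If the events arrive in XML format or through a different add-on, the equivalent fields may be named differently (for example src or src_nt_host after CIM normalization).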