All Posts


There is no objective "best approach" to an IaC setup for Splunk Enterprise. I would recommend choosing the tool your engineers are most familiar with and whose pricing structure best fits your budget.
Could you try clearing your browser cache and tell us which version of Splunk you are using? Also, if you recreate this panel in a fresh dashboard, does it produce the same error?
No, you shouldn't need to enter it every time as an input. You can create custom add-on settings which are not username/password; these are set once during app configuration and can be re-used across inputs.
It does show locked-out users as well as unlocked users. Honestly, I know who is locked out and who is not; I wish it would state Yes when a user is locked out instead of No for everyone. But the real issue I have is: how can I know which computer the lockout came from, or whether it is off-site?
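(A hedged sketch of one common approach, not taken from this thread: Windows Security EventCode 4740, "a user account was locked out," records the machine where the lockout originated. Assuming a standard Windows TA ingest, something like the following could surface it; the index, sourcetype, and Caller_Computer_Name field name are assumptions and may differ in your environment.)

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740
| stats latest(_time) as last_lockout by user Caller_Computer_Name
| convert ctime(last_lockout)

A blank or unrecognized Caller Computer Name may point to a device that is off-site or not domain-joined.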
Can anyone help me with an issue I get from time to time on my dashboard built using Splunk Dashboard Studio? For some reason this error occurs only for maps. Here's the query:

| tstats count from datamodel=Cisco_Security.ASA_Dataset where index IN (add_on_builder_index, adoption_metrics, audit_summary, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_se, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, endpoint_summary, fw_syslog, history, ioc, main, mcd, mcd_syslog, new_index_for_endpoint, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois) ASA_Dataset.event_type_filter_options IN (*) ASA_Dataset.severity_level_filter_options IN (*) ASA_Dataset.src_ip IN (*) ASA_Dataset.direction="outbound" groupby ASA_Dataset.dest
| iplocation ASA_Dataset.dest
| rename Country as "featureId"
| stats count by featureId
| geom geo_countries featureIdField=featureId
| where isnotnull(geom)

After reloading the page the error disappears.
Thanks, I used a similar configuration and now it works. I had to use == rather than =:

index=websphere websphere_logEventType=*
| stats count(websphere_logEventType) BY websphere_logEventType
| eval websphere_logEventType=case(websphere_logEventType=="I", "INFO", websphere_logEventType=="E", "ERROR", websphere_logEventType=="W", "WARNING", websphere_logEventType=="D", "DEBUG", true(), "Not Known")
| dedup websphere_logEventType
Assuming websphere_logEventType is a string, try something like this:

| eval websphere_logEventType=case(websphere_logEventType="I", "INFO", websphere_logEventType="E", "ERROR", websphere_logEventType="W", "WARNING", websphere_logEventType="D", "DEBUG", true(), "Not Known")

Otherwise, I, E, W and D are treated as field names (which don't appear to exist, hence the case evaluates to "Not Known").
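As a self-contained illustration (not part of the original answer), you can verify the quoting behaviour with generated data; makeresults and streamstats fabricate the four event types:

| makeresults count=4
| streamstats count as n
| eval websphere_logEventType=case(n=1, "I", n=2, "E", n=3, "W", n=4, "D")
| eval label=case(websphere_logEventType="I", "INFO", websphere_logEventType="E", "ERROR", websphere_logEventType="W", "WARNING", websphere_logEventType="D", "DEBUG", true(), "Not Known")
| table websphere_logEventType label

Dropping the quotes around "I", "E", "W" or "D" in the second eval makes every row fall through to "Not Known".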
Hey @NanSplk01 try

... | eval websphere_logEventType=case(websphere_logEventType="I", "INFO", websphere_logEventType="E", "ERROR", websphere_logEventType="W", "WARNING", websphere_logEventType="D", "DEBUG", 1=1, "Not Known")

If this helps, please upvote.
From what you have shown so far, if the EventCode is "True", the user is locked out and you set lockout to "Yes", but you haven't shown any events where this is the case. Is this because there are no events like this?
I am having trouble creating the connection to Splunk Cloud from Power BI. I have downloaded the latest version of the Splunk ODBC driver (3.1.1), configured it with what I think my username and password are (we authenticate via Active Directory with our tenant), and I have access to the access token in the Splunk Cloud console. The error I am getting is:

Details: "ODBC: ERROR [HY000] [Splunk][SplunkODBC] (40) Error with HTTP API, error code: Timeout was reached ERROR [HY000] [Splunk][SplunkODBC] (40) Error with HTTP API, error code: Timeout was reached"

Not sure how else to try configuring the ODBC connector.
@StanD3sec I don't think there is one for Splunk Enterprise yet (Splunk Cloud has the ACS API). But for Splunk Enterprise you can use the SDKs and REST API:

https://dev.splunk.com/enterprise/downloads/
https://docs.splunk.com/Documentation/Splunk/9.3.2/RESTREF/RESTlist

If this helps, please upvote.
Hello, Thank you for asking your question on the Community. I wanted to check whether you were able to find new information or a solution you could share here. If you still need help with this, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM)
This is my search. It brings back Not Known for every field instead of the correct case name:

index=websphere websphere_logEventType=*
| stats count(websphere_logEventType) BY websphere_logEventType
| eval websphere_logEventType=case(websphere_logEventType=I, "INFO", websphere_logEventType=E, "ERROR", websphere_logEventType=W, "WARNING", websphere_logEventType=D, DEBUG, true(), "Not Known")

What am I missing that will bring back the count and the case it belongs to, instead of always the Not Known case?
Hello. I cannot find a solution to this one here... I have logs in one Splunk instance. I've exported them to CSV and want to perform a one-time ingest of that CSV into a new on-prem Splunk Enterprise instance. I have the CSV and can import it. However, I can't figure out how to preserve each row/event's original 'host', timestamp, and 'sourcetype' entry. When I do the import, it records the 'host' as the Splunk indexer and the timestamp as the date of the import, which makes sense but is not the desired behavior. Here is a sample of the CSV:

_time,host,index,source,sourcetype
2024-11-19T11:36:05.000-0500,host1.example.com,test-index,/var/log/messages,syslog
2024-11-19T11:36:05.000-0500,host2.example.com,test-index,/var/log/messages,syslog

I removed the _raw column, but I can include it if necessary. How do I import these events while preserving the event time, host, and sourcetype fields? Is this even possible? I looked around here and can't find anyone with this scenario. Thank you in advance!
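(One possible approach, a sketch rather than a confirmed solution from this thread: define a custom sourcetype with INDEXED_EXTRACTIONS = csv so the timestamp is read from the _time column, and add an index-time transform to rewrite host per row. The stanza names below are made up, the REGEX assumes the exact column order in the sample above, and this assumes the file is monitored directly on the indexer rather than forwarded pre-parsed.)

props.conf:

[csv_onetime_import]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = _time
TRANSFORMS-original_host = csv_original_host

transforms.conf:

[csv_original_host]
# capture the second CSV column (host) and stamp it into event metadata
REGEX = ^[^,]+,([^,]+),
DEST_KEY = MetaData:Host
FORMAT = host::$1

Per-row sourcetype is harder because the sourcetype drives parsing itself; a similar transform writing to MetaData:Sourcetype with FORMAT = sourcetype::$1 can rewrite it, but test on a throwaway index first.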
Hi @Khalid.Rehan, Thank you for sharing the solution! 
_time, user, desc, OU, hostName, lockout

How is this for an example?
Please share anonymised examples of your log events.
SPL does not have conditional execution. The if function (not a command or statement) is part of where and eval expressions, and helps determine the value to test or assign to a field. In dashboards, conditional execution can be simulated by assigning different search strings to a token based on the values of other tokens. For example, in Simple XML:

<input type="dropdown" token="token1">
  ...
  <change>
    <condition match="$token1$ == &quot;-&quot; AND $token2$ == &quot;-&quot;">
      <set token="search">Field3=$token$</set>
    </condition>
    <condition>
      <set token="search">Field11=$token1$</set>
    </condition>
  </change>
</input>
...
<search>
  <query>index=foo $search$</query>
</search>
...
Hi All, link removed, this solves it. Thanks and best regards, Sekar. PS - my karma stats: given 2400+ and received 400+, thanks for reading!
That search shows some people who are locked out and some people who log in to a device; it shows some of everything. I wish it would determine who is locked out instead of stating No for everything.