All Topics

Does anyone know of a risk assessment done for apps like the Cisco SNA add-on (Cisco Secure Network Analytics (Stealthwatch) App for Splunk Enterprise | Splunkbase) that require all users to have the list_storage_passwords capability? Does this capability mean that users (once authenticated) could craft a request that would return a sensitive password in plaintext? Thanks
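For context, a hedged sketch of what such a request could look like. The storage/passwords REST endpoint exposes stored credentials, and as I understand it the clear_password field is visible to any authenticated user who holds list_storage_passwords; treat the exact output as something to verify on your own version:

| rest /services/storage/passwords splunk_server=local
| table title realm username clear_password

The same data is reachable outside Splunk Web, e.g.:

curl -k -u <user>:<pass> https://<host>:8089/servicesNS/nobody/-/storage/passwords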
Hi all, I have the following query:

index=wineventlog
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S")
| eval device_name = lower(Workstation_Name)
| dedup device_name
| table _time user device_name src_nt_host action ComputerName host SourceName Account_Name Security_ID Logon_Type TaskCategory Type app eventtype product vendor vendor_product Account_Domain dest dest_nt_domain dest_nt_host Error_Code EventCode EventType name source SourceName sourcetype src src_domain src_ip src_nt_domain src_port Virtual_Account LogName Logon_GUID Impersonation_Level

With a "Yesterday" time filter, this search takes more than an hour, and when I try to export the results it processes to about 60% and then fails with a "search auto-cancelled" error. Is there any way to handle the processing time for this query, or can I get the data some other way? If I use a shorter timeframe, such as the last 60 minutes, it takes almost 5 minutes and I can get the data. Please suggest.
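In case it helps, a minimal sketch of one common way this kind of search is sped up, under the assumption that the listed columns are all you need: select fields early so less raw data is moved, and replace dedup with stats. The index and field names are taken from the query above; extend the latest() list with the remaining columns and verify the output matches:

index=wineventlog earliest=-1d@d latest=@d
| fields user Workstation_Name src_nt_host action ComputerName host EventCode Logon_Type
| eval device_name=lower(Workstation_Name)
| stats latest(_time) as _time latest(user) as user latest(action) as action by device_name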
Here is the situation: search web security appliance data (index=network sourcetype=cisco_wsa_squid) for non-business activity, i.e., usage values other than Business (usage!=Business), during the previous business week. Here is the query I got for it:

index=network sourcetype=cisco_wsa_squid (usage!=Business) earliest=-7d@w1 latest=@w6

Could someone explain why latest is @w6 and not -7d@w6? Won't @w6 include the current week's data? #timemodifiers
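One way to see what those modifiers actually resolve to is to evaluate them with relative_time (a quick sketch using standard SPL):

| makeresults
| eval earliest_t=relative_time(now(), "-7d@w1"), latest_t=relative_time(now(), "@w6")
| eval earliest_h=strftime(earliest_t, "%F (%a)"), latest_h=strftime(latest_t, "%F (%a)")
| table earliest_h latest_h

Run during the current week, @w6 snaps backwards to the most recent Saturday, i.e. the end of the previous week, so no current-week data is included; -7d@w6 would first subtract 7 days and then snap, landing a full week earlier.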
I am trying to run an eval expression inside timechart to pull some stats, but it errors out as below:

Error in 'timechart' command: The eval expression has no fields: 'avg(properties.elapsed)'.

<search query> | timechart span=mon eval(round(avg(properties.elapsed),2)) as AverageResponsetime

It was working perfectly fine on a Splunk Enterprise system.
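For comparison, a hedged sketch of the syntax that usually works here: inside an eval expression, a field name containing a dot needs single quotes, and the span unit is normally written 1mon rather than mon. Verify against your data:

<search query>
| timechart span=1mon eval(round(avg('properties.elapsed'), 2)) AS AverageResponsetime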
Hey everyone, I am currently trying to write a search that monitors outgoing e-mail traffic. The goal is to see whether business-relevant information is being exfiltrated via e-mail. Since I am new to writing SPL, I tried the following. First, I wanted a simple search that shows me all e-mails whose size exceeds a set threshold. That's what I came up with:

| datamodel Email search
| search All_Email.src_user="SOMETHING I USE TO MAKE SURE THE TRAFFIC IS GOING FROM INTERNAL TO EXTERNAL" AND sourcetype="fml:*"
| stats values(_time) as _time values(All_Email.src_user) as src_user values(All_Email.recipient) as recipient values(All_Email.file_name) as file_name values(All_Email.subject) as subject values(All_Email.size) as size by All_Email.message_id
| eval size_MB=round(size/1000000,3)
| `ctime(alert_time)`
| where 'size_MB'>X
| fields - size

As far as I can see, it does what I initially wanted. Upon further testing and thinking, I noticed a flaw: if data is exfiltrated over time through many different e-mails, this search would not trigger, since the threshold X would not be exceeded in any single e-mail. That's why I wanted to write a new search using tstats (since the above search was pretty slow), where the traffic from a sender to the same recurring recipient is added up over a given time period; if the accumulated traffic exceeds a given threshold, the search triggers. I then came up with this:

| tstats min(_time) as alert_time max(_time) as end_time values(All_Email.file_name) as file_name values(All_Email.subject) as subject values(All_Email.size) as size from datamodel=Email WHERE All_Email.src_user="SOMETHING I USE TO MAKE SURE THE TRAFFIC IS GOING FROM INTERNAL TO EXTERNAL" AND sourcetype="fml:*" by All_Email.src_user, All_Email.recipient
| eval size_MB=round(size/1000000,3)

This search is not finished (the threshold is missing, etc.), because I noticed that an e-mail with multiple attachments does not calculate the size correctly: it lists all the sizes of the different attachments but does not calculate a sum. I think the "by All_Email.src_user, All_Email.recipient" clause does not work as I intended. I would be happy to get feedback on how to improve; maybe the code I wrote is way too complicated or does not work as it's supposed to. Since I am new to writing SPL: are there any standards for writing clean SPL, or resources where I can study many different (good) searches so that I can improve at writing my own? I would appreciate any form of help. Thank you very much!
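On the accumulation idea, a minimal sketch of one way to sum per sender/recipient pair per day with tstats, reusing your own filters; the sum() aggregation (instead of values()) is what produces a single total, and the 100 MB threshold is a hypothetical placeholder:

| tstats sum(All_Email.size) as total_size from datamodel=Email where All_Email.src_user="SOMETHING I USE TO MAKE SURE THE TRAFFIC IS GOING FROM INTERNAL TO EXTERNAL" sourcetype="fml:*" by All_Email.src_user All_Email.recipient _time span=1d
| eval size_MB=round(total_size/1000000, 3)
| where size_MB > 100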
How can I leverage Splunk Cloud to:

- Monitor System Health & Performance: track uptime, downtime, and resource utilization (CPU/memory) of essential infrastructure.
- Enhance Endpoint & Network Security: analyze firewall activity, VPN connections, and endpoint protection status.
- Utilize UEBA: identify unusual user behavior that may signal insider threats or compromised accounts.
- Visualize Threat Response Metrics: build dashboards to track the time taken for threat detection, investigation, and resolution.
- Analyze Cyberattack Patterns: create dashboards to identify attack sources, detect trends, and refine mitigation strategies.
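As a starting point for the first item, a hedged sketch of a resource-utilization panel. It assumes something like the Splunk Add-on for Unix and Linux is feeding an os index with a cpu sourcetype; the index, sourcetype, and pctIdle field are assumptions to adapt to your environment:

index=os sourcetype=cpu
| eval cpu_used=100-pctIdle
| timechart span=15m avg(cpu_used) as avg_cpu_pct by host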
Hi, I am working on installing a CA-signed (ssl.com) certificate on a Splunk Enterprise instance and keep hitting these two errors:

03-18-2025 23:32:08.751 +0000 ERROR UiHttpListener [122666 WebuiStartup] - TLS certificate is missing or invalid, please check your configuration or certificate file.
03-18-2025 23:32:08.751 +0000 ERROR UiHttpListener [122666 WebuiStartup] - Loaded TLS configurations from conf file=web, TLS cert check failed

web.conf:

[settings]
mgmtHostPort = 0.0.0.0:8089
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/splunk.key
serverCert = /opt/splunk/etc/auth/mycerts/splunk.crt

The .crt file contains the server cert with the CA chain concatenated at the end. The cert file is valid:

[root@splunk mycerts]# openssl x509 -in splunk.crt -noout -enddate
notAfter=Jun 16 19:25:41 2025 GMT

openssl verify -CAfile splunk.ca-bundle splunk.crt
splunk.crt: OK

How exactly does Splunk perform the "TLS cert check", and is there a debug method to figure out what exactly it does not like about the CA-signed cert I am trying to configure? Permissions and cert file ownership are set up correctly (i.e., 600/644 and splunk:splunk). Thank you!
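Two generic checks that sometimes narrow this down (standard openssl commands, not Splunk-specific, assuming an RSA key): confirm the private key actually matches the certificate by comparing moduli:

openssl x509 -noout -modulus -in splunk.crt | openssl md5
openssl rsa -noout -modulus -in splunk.key | openssl md5

If the two digests differ, the key/cert pair is mismatched. Also worth checking: if splunk.key is passphrase-protected and no password is configured for splunkweb, that alone can fail the TLS cert check.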
I have a saved alert that is straightforward. The search is:

index=mydata action=block

It is on a cron schedule, and I get results from it when manually running the search. I can see the field asset.name is returned and it has the expected data in it. I configured my alert action to email me, and in the body I put $result.asset.name$. When the email is received, it is blank. For troubleshooting, I tried a different field named id and put both $result.id$ and $result.asset.name$ in the body of the email alert action: the id data shows up, but not asset.name. I changed my search to end with | table asset.name and again see the data I want in a manual search. I also tried adding | eval dvc='asset.name' to my search, and dvc has the data I want in it; but if I put $result.dvc$ in the email body, I again get a blank email. Please help me understand what I'm doing wrong. Thanks
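One pattern often suggested for this symptom, offered as a guess rather than a confirmed cause: dots in field names tend to confuse $result.<field>$ token expansion, and the token only reads fields present in the first row of the final result set. A hedged sketch using your own search, renaming to an underscore name and keeping the field in the output:

index=mydata action=block
| rename asset.name as asset_name
| table _time asset_name id

Then reference $result.asset_name$ in the email body and verify with a test alert.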
Login failed:
Your license is expired. Please login as an administrator to update the license.
We are transitioning from getting the HEC data through HFs to getting it directly to the indexers, and we are wondering: when introducing a new data source, are we forced to do an indexer rolling restart?
I'm trying to have the dashboard return all results if the text field is *, or return all phone numbers matching a partial search in the text box.

<input type="text" token="PhoneNumber" searchWhenChanged="true">
  <label>Phone number to search</label>
  <default>*</default>
</input>

The search on the dashboard panel:

index="cellulardata"
| where if ($PhoneNumber$ ="*", (like('Wireless number and descriptions',"%"),like('Wireless Number and descriptions',"%$phonenumber$%" )
| eval Type=if(like(lower('Charge description'), "%text%") OR like(lower('Charge description'), "%ict%"), "Text", "Voice")
| eval Direction=if(Type="Voice" AND 'Called city_state'="INCOMING,CL","Incoming","Outgoing")
| eval datetime=Date." ".Time
| eval _time=strptime(datetime,"%m/%d/%Y %H:%M")
| eval DateTime=strftime(_time, "%m/%d/%y %I:%M %p")
| eval To_from=replace(To_from,"\.","")
| table DateTime, "Wireless number and descriptions", To_from, Type, Direction
| rename "Wireless number and descriptions" as Number
| sort -DateTime

The query returns no results whether or not the text field is empty. I've removed the line below from the search, so I know the rest of the search works:

| where if ($PhoneNumber$ ="*", (like('Wireless number and descriptions',"%"),like('Wireless Number and descriptions',"%$phonenumber$%" )

I've tried comparing this to other dashboards I've seen and searching Google, but no luck for some reason.
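For reference, a hedged sketch of a simpler shape that usually achieves "* matches everything, otherwise partial match". Note that tokens are case-sensitive, so $phonenumber$ in the where clause would not resolve the $PhoneNumber$ token, which also looks like part of the problem:

index="cellulardata"
| search "Wireless number and descriptions"="*$PhoneNumber$*"

followed by the rest of your pipeline unchanged. With the default token value * this becomes a match-all wildcard; with a partial number it becomes a substring match.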
We are transitioning from sending the data through HFs to sending it directly to the indexers, and we wonder how to configure the load balancer to handle this HTTP data. My understanding is that HTTP is based on TCP, TCP is connection-oriented, and a sender can therefore get locked to a particular indexer, which would lead to an uneven distribution of the load. Any suggestions?
For our indexers, we see the following under 'Storage I/O Saturation (Mount Point)':

0.90% (/opt/splunk)
6.56% (/indexing/splunk_cold)

We have 14 indexers with roughly the same saturation levels, and I wonder if that is healthy. We would like to direct the HEC data straight to the indexers (instead of going through the HFs), so I wonder whether we are ready at the I/O level.
I am reviewing a previously created lookup that is based on a KV store collection. There is a custom script (contained in a custom kvstore app) on an HF that pulls data into a file.csv, processes the file changes, and then updates a KV store collection. My question is: how do I verify this collection (i.e. FooBar) is being replicated to the search head cluster?

The collections.conf on the HF shows:

[FooBar]
replicate = true
field.<something1> = string
field.<something2> = string
field.<something3> = string

The same collections.conf is on the SHC (in /opt/splunk/etc/apps/kv_store_app/local), probably created via the WebUI lookup settings page; it says only:

[FooBar]
disabled = 0

When I run | inputlookup FooBar on both the HF and the SHC members, the results are different, so it appears to be out of sync or broken. Any advice or references for this scenario are appreciated. Thank you
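Two checks that may help, hedged since I can't see your deployment: compare row counts on each instance, and confirm KV store member status on the SHC:

| inputlookup FooBar | stats count

$SPLUNK_HOME/bin/splunk show kvstore-status

Also note, as I understand it, replicate=true in collections.conf controls replication of the collection to indexers as part of the knowledge bundle; it does not synchronize the KV store between an HF and a separate SHC, so a script writing on the HF would not automatically appear on SHC members.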
I have a Splunk Classic dashboard. At the top of the dashboard there is a table panel with Top Critical Alerts: the rule title in the left column and the count in the right column. I set a drilldown for this table:

<drilldown>
  <set token="rule_token">$click.name$</set>
</drilldown>

Later on, I have an event panel that has a multiselect:

<input type="multiselect" token="rule_token" searchWhenChanged="true">
  <label>Rule</label>
  <choice value="*">All Rules</choice>
  <default>*</default>
  <fieldForLabel>RuleTitle</fieldForLabel>
  <fieldForValue>RuleTitle</fieldForValue>
  <search>
    <query>| tstats count where index=index by RuleTitle</query>
  </search>
  <prefix>RuleTitle IN (</prefix>
  <delimiter>,</delimiter>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
</input>

I want the token set in the multiselect to be changed by the drilldown from the Critical Alerts table. For example, if I select the value "Defender Alert" in the Critical Hits table, I want the rule_token in the multiselect to change to Defender Alert. How can I get this to happen?
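A pattern that usually works for driving an input from a drilldown, offered as a sketch: set the form.-prefixed token so the multiselect UI updates too, and use $click.value$ (the value of the leftmost column in the clicked row) rather than $click.name$ (which is the column name):

<drilldown>
  <set token="form.rule_token">$click.value$</set>
</drilldown>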
Hello AppD experts, I've installed a DB agent collector using Windows authentication (as per the image below) to connect to MS SQL Server. When starting the dbagent collector, I am facing the issue below:

18 Mar 2025 14:02:14,194 ERROR [DBAgent-4] ADBMonitorConfigResolver: - [SQL_MES_TEST] Failed to resolve DB topological structure.
com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication. ClientConnectionId:84b24ba8-aae7-4084-a050-0de9e2f2ffea
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:3145) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.AuthenticationJNI.<init>(AuthenticationJNI.java:72) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3937) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:85) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3926) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7375) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3200) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2707) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2356) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:2207) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1270) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:861) ~[mssql-jdbc-8.4.0.jre8.jar:?]
at java.sql.DriverManager.getConnection(DriverManager.java:682) ~[java.sql:?]
at java.sql.DriverManager.getConnection(DriverManager.java:191) ~[java.sql:?]
at com.singularity.ee.agent.dbagent.collector.db.relational.DriverManager.getConnection(DriverManager.java:58) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.singularity.ee.agent.dbagent.collector.db.relational.DriverManager.getConnectionWithEvents(DriverManager.java:74) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.appdynamics.dbmon.dbagent.task.resolver.ARelationalDBMonitorConfigResolver.initConnection(ARelationalDBMonitorConfigResolver.java:150) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.appdynamics.dbmon.dbagent.task.resolver.MSSQLDBMonitorConfigResolver.resolveTopology(MSSQLDBMonitorConfigResolver.java:118) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.appdynamics.dbmon.dbagent.task.resolver.ADBMonitorConfigResolver.resolveTopologicalStructure(ADBMonitorConfigResolver.java:124) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.appdynamics.dbmon.dbagent.task.resolver.ADBMonitorConfigResolver.run(ADBMonitorConfigResolver.java:205) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) ~[agent-25.1.0-1223.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:128) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:215) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:253) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) ~[agent-25.1.0-1223.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) ~[agent-25.1.0-1223.jar:?]
at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: java.lang.UnsatisfiedLinkError: no mssql-jdbc_auth-8.4.0.x64 in java.library.path: C:\dbagent\db-agent-25.1.0.4748\auth\x64
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:2442) ~[?:?]
at java.lang.Runtime.loadLibrary0(Runtime.java:916) ~[?:?]
at java.lang.System.loadLibrary(System.java:2066) ~[?:?]
at com.microsoft.sqlserver.jdbc.AuthenticationJNI.<clinit>(AuthenticationJNI.java:51) ~[mssql-jdbc-8.4.0.jre8.jar:?]
... 27 more

I tried using the latest DLL file (below) from the Microsoft site in the auth\x64 directory, but that doesn't help either:

mssql-jdbc_auth-12.8.1.x64.dll

Any ideas how to solve this issue? If I use a credential (username and password), it works fine.
The idea here is that the customer only wants to create one user for all hosts (servers) using Windows authentication, not one user per server. Imagine having to create that user on all servers with different applications.
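Reading the Caused by line, a hedged observation inferred from the stack trace rather than from AppDynamics documentation: the bundled 8.4.0 JDBC driver looks for an auth DLL carrying its own version in the name, so the directory on the java.library.path would need the matching file, for example:

C:\dbagent\db-agent-25.1.0.4748\auth\x64\mssql-jdbc_auth-8.4.0.x64.dll

i.e. the auth DLL from the Microsoft JDBC Driver 8.4 package (matching x64 and the JRE in use). The 12.8.1 DLL will not be picked up by the 8.4.0 driver under that name.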
Hello everyone, I have a dataset where I'm generating a column with the number of servers per day, using a timechart command over the last 2 weeks:

earliest="-14d" latest=now() index=*......
| timechart span=1d dc(hostname) as servercount

My requirement is to split the data into the previous week and the latest week, and get the maximum count from each week.
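A hedged sketch of one way to do the split on top of the existing timechart; the 7-day boundary via relative_time is an assumption to adjust to your definition of "week":

earliest="-14d" latest=now() index=*......
| timechart span=1d dc(hostname) as servercount
| eval week=if(_time >= relative_time(now(), "-7d@d"), "latest week", "previous week")
| stats max(servercount) as max_servercount by week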
Hello team, I need to run the anomalies command on top of results returned by a lookup. My lookup is geo: it enriches my events with country and city based on IP address. Then I need to use country and city as input for the anomalies command, but that does not work because of streaming.

This works fine:

index=mydata
| rename public_ip as ip
| lookup maxmind_secone_lookup ip OUTPUT country, region, city

For this:

index=mydata
| rename public_ip as ip
| lookup maxmind_secone_lookup ip OUTPUT country, region, city
| anomalies threshold=0.03 by field1, field2, country

I am getting the error:

Streamed search execute failed because: Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/var/run/searchpeers/splunk-1742240975/apps/myapp/bin/geoip_max_lookup.py'.

How do I fix it? I need to run anomalies based on the geo-enriched country. Thanks,
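One workaround sometimes used when an external lookup script only runs cleanly on the search head: force the rest of the pipeline onto the search head with localop (a standard SPL command), so the indexers never try to execute the script. A hedged sketch based on your search:

index=mydata
| rename public_ip as ip
| localop
| lookup maxmind_secone_lookup ip OUTPUT country, region, city
| anomalies threshold=0.03 by field1, field2, country

This trades distributed execution for correctness, so it may be slower on large result sets.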
Hello, I have this search:

| inputlookup defender_onboard.csv
| fillnull value=NA
| search Region="***" 4LetCode="*"
| search NOT [inputlookup ex_sou.csv | fields DeviceName]
| search NOT [inputlookup ex_defender.csv | fields DeviceName]
| table DeviceName Region DeviceType OSType OSVersion

Right now I am getting a result where Region holds multiple values per row; I want Region to be expanded so that each value gets an individual row.
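Assuming Region is a multivalue field, a hedged one-line addition that expands each value into its own row:

| table DeviceName Region DeviceType OSType OSVersion
| mvexpand Region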
There was a time when an indexer server shut down unexpectedly, and since then I've been struggling with indexer clustering: the RF and SF are not met. Every index satisfies RF and SF except one index. I have tried roll / resync / rolling restart from the Master node, but it doesn't work. I'm now trying to find the error bucket, remove it from the CLI environment, and restart the cluster. Is that the right way to solve this problem? Or please suggest a better way.
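One way to locate the problem buckets from a search head, offered as a sketch (dbinspect is a standard command; replace <problem_index> with the affected index, and compare copies against your replication factor):

| dbinspect index=<problem_index>
| stats dc(splunk_server) as copies values(state) as states by bucketId
| where copies < 2

Buckets with fewer copies than the replication factor are the ones the cluster cannot fix up; the manager node's Bucket Status page (Settings, Indexer Clustering) usually names them as well.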