All Posts


@livehybrid Thank you for the sanity check. I did not understand how this would have worked, and I was not sure that sort of replication (HF to SHC) was even possible. Apparently some other mechanism (to transfer the file.csv to the SHC) is missing, broken, or unknown at this time. I will keep you posted. Thanks
We are in a transition from sending the data through HFs to sending the data directly to the indexers, and we wonder how to configure the load balancer to handle this HTTP data. My understanding is that HTTP runs over TCP, and since TCP is connection-based, the load balancer could pin a sender to a particular indexer, which would lead to an uneven distribution of the load. Any suggestions?
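The pinning concern in the question above can be illustrated with a toy simulation (not from the thread; indexer names and the round-robin balancer are hypothetical): a round-robin load balancer picks a backend only when a new TCP connection is opened, so a HEC sender that keeps one long-lived keep-alive connection lands everything on one indexer, while a sender that reconnects periodically spreads load roughly evenly.

```python
from collections import Counter

INDEXERS = ["idx1", "idx2", "idx3"]  # hypothetical indexer pool behind the LB

def simulate(events, requests_per_connection):
    """Round-robin LB: a backend is chosen only when a new TCP connection
    is opened, not per HTTP request sent on an existing connection."""
    load = Counter()
    rr = 0
    conn_target = None
    sent_on_conn = 0
    for _ in range(events):
        if conn_target is None or sent_on_conn >= requests_per_connection:
            conn_target = INDEXERS[rr % len(INDEXERS)]  # LB picks on connect
            rr += 1
            sent_on_conn = 0
        load[conn_target] += 1
        sent_on_conn += 1
    return load

# One long-lived keep-alive connection: everything lands on a single indexer.
print(simulate(9000, requests_per_connection=9000))
# Reconnecting every 100 requests spreads the load evenly across the pool.
print(simulate(9000, requests_per_connection=100))
```

This only sketches the sticky-connection effect; common mitigations are along the lines of capping requests or idle time per connection at the load balancer so senders are forced to reconnect, or using a least-connections balancing policy.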
For our indexers, we see the following under 'Storage I/O Saturation (Mount Point)':

0.90% (/opt/splunk)
6.56% (/indexing/splunk_cold)

We have 14 indexers with roughly the same saturation levels, and I wonder if that is healthy. We would like to direct the HEC data straight to the indexers (instead of going through the HFs), so I wonder whether we are ready at the I/O level.
I was able to eliminate the _time column by changing the event tag to a table tag.

<table>
  <search>
    <query>index=_internal | fields - _time | table ApplicationName, ApplicationPath, LastRun</query>
    <earliest>$earliest$</earliest>
    <latest>$latest$</latest>
  </search>
  <option name="refresh.display">progressbar</option>
  <fields>["ApplicationName","ApplicationPath","LastRun"]</fields>
</table>
Hi @Glasses2

I think there may be some confusion here: replication of a KV Store collection happens only between members of a search head cluster (SHC). It isn't possible to natively replicate KV Stores from a HF to an SHC; this would need further architecting with additional scripts (there may be apps which can do this) to allow this kind of replication to occur.

Check out the "KV Store Tools Redux" app (https://splunkbase.splunk.com/app/5328), as it can push KV Stores to remote instances and might solve your requirement.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @luizpolli

The java process is specifically looking for mssql-jdbc_auth-8.4.0.x64.dll rather than the higher version you have. I would suggest downloading and installing the specific 8.4.0 version of the DLL, as I believe this should resolve the issue.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
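The version coupling described above (driver jar version must match the native auth DLL version) can be sanity-checked mechanically. The helper below is a hypothetical illustration, not part of any Splunk or AppDynamics tooling: it derives the DLL name the driver will look for from the jar filename.

```python
import re

def expected_auth_dll(jar_name, arch="x64"):
    """Derive the native auth DLL name the mssql-jdbc driver will look for,
    based on the driver jar's version (e.g. mssql-jdbc-8.4.0.jre8.jar)."""
    m = re.match(r"mssql-jdbc-(\d+\.\d+\.\d+)\.jre\d+\.jar$", jar_name)
    if not m:
        raise ValueError(f"unrecognised driver jar name: {jar_name}")
    return f"mssql-jdbc_auth-{m.group(1)}.{arch}.dll"

# The 8.4.0 jar expects the 8.4.0 DLL; dropping in a 12.8.1 DLL will not help,
# because the JNI loader looks up the library by its exact versioned name.
print(expected_auth_dll("mssql-jdbc-8.4.0.jre8.jar"))
```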
I am reviewing a previously created lookup that is based on a KV Store collection. There is a custom script (contained in a custom kvstore app) on an HF that pulls data into a file.csv, processes the file changes, and then updates a KV Store collection.

My question is: how do I verify this collection (i.e. FooBar) is being replicated to the Search Head Cluster?

The collections.conf on the HF shows:

[FooBar]
replicate = true
field.<something1> = string
field.<something2> = string
field.<something3> = string

The same collections.conf is on the SHC (in /opt/splunk/etc/apps/kv_store_app/local), probably created via the WebUI lookup settings page. It contains only:

[FooBar]
disabled = 0

When I run "| inputlookup FooBar" on both the HF and the SHC members, the results are different, so replication appears to be out of sync or broken. Any advice or references appreciated for this scenario. Thank you
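One way to quantify the "results are different" observation above (an illustrative sketch, not a built-in Splunk feature): export the collection from both instances, e.g. with | inputlookup FooBar or the REST endpoint /servicesNS/nobody/<app>/storage/collections/data/FooBar, and diff the record sets locally. The helper and sample records below are hypothetical.

```python
def collection_drift(records_a, records_b, key="_key"):
    """Compare two exported KV Store record lists and report which keys are
    missing on either side and which shared keys differ in content."""
    a = {r[key]: r for r in records_a}
    b = {r[key]: r for r in records_b}
    return {
        "only_in_a": sorted(set(a) - set(b)),
        "only_in_b": sorted(set(b) - set(a)),
        "differing": sorted(k for k in set(a) & set(b) if a[k] != b[k]),
    }

# Hypothetical exports from the HF and one SHC member:
hf  = [{"_key": "1", "v": "x"}, {"_key": "2", "v": "y"}]
shc = [{"_key": "1", "v": "x"}, {"_key": "3", "v": "z"}]
print(collection_drift(hf, shc))
```

An empty report (all three lists empty) would indicate the two copies are in sync; anything else pinpoints which records the transfer mechanism missed.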
Hi @lcguilfoil

To achieve this you can use the following:

<drilldown>
  <eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval>
</drilldown>

I've updated my previous answer to include this, but also see below for a working example:

<form version="1.1" theme="light">
  <label>AnswersTesting</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="rule_token" searchWhenChanged="true">
      <label>Rule</label>
      <choice value="*">All Rules</choice>
      <default>*</default>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>| tstats count where index=_internal by host</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <prefix>host IN (</prefix>
      <delimiter>,</delimiter>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| tstats count where index=_internal by host</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
I'm using a Classic Dashboard. This is my XML:

<event>
  <search>
    <query>index=index | fields - _time | table ApplicationName, ApplicationPath, LastRun</query>
  </search>
  <fields>ApplicationName, ApplicationPath, LastRun</fields>
  <option name="type">table</option>
</event>
Please share your query so we can see how you're creating the column header.  I suspect you need to remove "_time" from the table command.
Hi, This removes the values of _time, but not the column header for me. Any ideas?
Thank you! That works! Now, is there a way to select multiple values in the table and have multiple values set in the token? For example, if the Rules in the Critical Table are "Defender Alert" and "Antivirus Hacktool Detected" and I click on both of them, is there a way to have both assigned to the token rule_token and appear in the multiselect? Please let me know if that makes sense!
@Na_Kang_Lim  Yes, you can definitely use props.conf and transforms.conf to scale this more broadly and make your field extractions more manageable.
Try it like this:

<drilldown>
  <set token="form.rule_token">$click.name$</set>
</drilldown>
Hi @lcguilfoil

You need to use "form.rule_token" in the set token, like this:

<set token="form.rule_token">$click.value$</set>

Updated: if you want to append to existing selections, then use:

<eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval>

Here is a full example to demonstrate, if it helps:

<form version="1.1" theme="light">
  <label>AnswersTesting</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="rule_token" searchWhenChanged="true">
      <label>Rule</label>
      <choice value="*">All Rules</choice>
      <default>*</default>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>| tstats count where index=_internal by host</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <prefix>host IN (</prefix>
      <delimiter>,</delimiter>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| tstats count where index=_internal by host</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
I have a Splunk Classic Dashboard. I have a Table Panel at the top of the dashboard that has Top Critical Alerts, with the Rule Title in the left column and the number in the right column. I set a drilldown for this table:

<drilldown>
  <set token="rule_token">$click.name$</set>
</drilldown>

Later on, I have an Event Panel that has a Multiselect:

<input type="multiselect" token="rule_token" searchWhenChanged="true">
  <label>Rule</label>
  <choice value="*">All Rules</choice>
  <default>*</default>
  <fieldForLabel>RuleTitle</fieldForLabel>
  <fieldForValue>RuleTitle</fieldForValue>
  <search>
    <query>| tstats count where index=index by RuleTitle</query>
  </search>
  <prefix>RuleTitle IN (</prefix>
  <delimiter>,</delimiter>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
</input>

I want the token set in the multiselect to be changed by the drilldown from the Critical Alerts table. For example, if I select the value "Defender Alert" in the Critical Hits table, I want the rule_token in the multiselect to change to Defender Alert. How can I get this to happen?
The DLL solution did not work for me; I'm still facing the issue.
@kiran_panchavat Have you actually read the question? @MichalG1 explicitly wrote that the lookup does work when it's the last component in the search; if its results are further processed down the search pipeline, it doesn't. @MichalG1 Have you compared search.log from both those cases? It is indeed a bit strange that it does/doesn't work this way.
Hello appd experts,

I've installed a db agent collector using Windows authentication (as per the image below) to connect to MS SQL Server. When starting the dbagent collector, I'm facing the issue below:

18 Mar 2025 14:02:14,194 ERROR [DBAgent-4] ADBMonitorConfigResolver: - [SQL_MES_TEST] Failed to resolve DB topological structure.
com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication. ClientConnectionId:84b24ba8-aae7-4084-a050-0de9e2f2ffea
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:3145) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.AuthenticationJNI.<init>(AuthenticationJNI.java:72) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3937) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:85) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3926) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7375) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3200) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2707) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2356) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:2207) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1270) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:861) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at java.sql.DriverManager.getConnection(DriverManager.java:682) ~[java.sql:?]
    at java.sql.DriverManager.getConnection(DriverManager.java:191) ~[java.sql:?]
    at com.singularity.ee.agent.dbagent.collector.db.relational.DriverManager.getConnection(DriverManager.java:58) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.singularity.ee.agent.dbagent.collector.db.relational.DriverManager.getConnectionWithEvents(DriverManager.java:74) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.appdynamics.dbmon.dbagent.task.resolver.ARelationalDBMonitorConfigResolver.initConnection(ARelationalDBMonitorConfigResolver.java:150) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.appdynamics.dbmon.dbagent.task.resolver.MSSQLDBMonitorConfigResolver.resolveTopology(MSSQLDBMonitorConfigResolver.java:118) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.appdynamics.dbmon.dbagent.task.resolver.ADBMonitorConfigResolver.resolveTopologicalStructure(ADBMonitorConfigResolver.java:124) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.appdynamics.dbmon.dbagent.task.resolver.ADBMonitorConfigResolver.run(ADBMonitorConfigResolver.java:205) ~[db-agent.jar:Database Agent v25.1.0.4748 GA compatible with 4.5.2.0 Build Date 2025-01-23]
    at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) ~[agent-25.1.0-1223.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:128) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:215) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:253) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) ~[agent-25.1.0-1223.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) ~[agent-25.1.0-1223.jar:?]
    at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: java.lang.UnsatisfiedLinkError: no mssql-jdbc_auth-8.4.0.x64 in java.library.path: C:\dbagent\db-agent-25.1.0.4748\auth\x64
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:2442) ~[?:?]
    at java.lang.Runtime.loadLibrary0(Runtime.java:916) ~[?:?]
    at java.lang.System.loadLibrary(System.java:2066) ~[?:?]
    at com.microsoft.sqlserver.jdbc.AuthenticationJNI.<clinit>(AuthenticationJNI.java:51) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3936) ~[mssql-jdbc-8.4.0.jre8.jar:?]
    ... 27 more

I tried to use the latest DLL file from the Microsoft site (mssql-jdbc_auth-12.8.1.x64.dll) within the auth\x64 directory. It doesn't help either.

Any ideas how to solve this issue? If I use a credential (username and password) it works fine. The idea here is that the customer only wants to create one user for all hosts (servers), using Windows authentication, rather than creating one user for each server. Imagine creating this user on all servers with different applications.
For anyone upgrading to 9.4.1 and getting this, please read.

I had this exact problem when testing (upgrading from 9.3.1 to 9.4.1) in a sandbox environment. After many "failed" attempts I realized this is actually just a normal status and part of the KV Store upgrade process.

What actually happens is that the KV Store upgrades in steps, from 4.x -> 5.x -> 6.x -> 7.x. During these steps, the status of the KV Store is in a failed state:

featureCompatibilityVersion : 5.0
...
status : failed
...
serverVersion : 5.0.26

featureCompatibilityVersion : 6.0
...
status : failed
...
serverVersion : 6.0.15

featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
...
status : failed

You also cannot rely on the "splunk show standalone-kvupgrade-status" command:

/opt/splunk/bin/splunk show standalone-kvupgrade-status
Unable to read mongo database version. Check KV Store health.

Just ignore all of these and allow the system to run for some 5+ minutes (do not stop Splunk!). Then the upgrade finally completes and the status goes to ready:

featureCompatibilityVersion : 7.0
...
status : ready
...
serverVersion : 7.0.14
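The "ignore the failed states and wait" advice above amounts to a poll-until-ready loop. A minimal sketch (the status callable is injected here so the loop is testable; in practice it would shell out to the Splunk CLI, and all timings below are illustrative, not prescribed):

```python
import time

def wait_for_kvstore_ready(get_status, timeout_s=600, poll_s=15):
    """Poll a status callable until it reports 'ready', tolerating the
    intermediate 'failed' states seen during the stepwise KV Store upgrade."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "ready":
            return True
        # 'failed' is expected mid-upgrade; keep waiting instead of bailing out.
        time.sleep(poll_s)
    return False

# Simulated upgrade: two 'failed' polls before the store becomes ready.
states = iter(["failed", "failed", "ready"])
print(wait_for_kvstore_ready(lambda: next(states), poll_s=0))
```

The key design point, matching the observation above, is that a transient "failed" status must not abort the wait; only the overall timeout should.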