All Topics


Is it possible to run a different filter in an index search based on a condition in the dropdown below? The second filter works for both IPv4 and IPv6, but it is slowing down the search, and I don't want IPv4 going through my filter for IPv6. Thanks.

If the IPv4 dropdown box is selected > select 1.1.1.1
ip_token=1.1.1.1
Search: index=vulnerability_index ip="$ip_token$"

If the IPv6 dropdown box is selected > select 2001:db8:3333:4444:5555:6666::2101
ip_token=2001:db8:3333:4444:5555:6666::2101
Search: index=vulnerability_index | rex mode=sed field=ip "s/<regex>/<replacement>/<flags>" | search ip="$ip_token$"
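One possible approach, sketched below, is to gate the IPv6 rewrite on whether the selected token (and the event's ip value) actually contains a colon, so IPv4 values never run through the IPv6 normalization. The <regex>/<replacement> placeholders stand in for whatever rewrite the existing sed expression performs, and $ip_token$ is assumed to be the dashboard token set by the dropdown.

index=vulnerability_index
| eval ip_norm=if(match("$ip_token$", ":") AND match(ip, ":"), replace(ip, "<regex>", "<replacement>"), ip)
| search ip_norm="$ip_token$"

An alternative is to do the branching in the dashboard itself: have each dropdown's change handler set a second token containing the whole filter fragment, and reference that token in the panel search, so only the IPv6 path ever includes the rex.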
Hello Splunk community, I have an issue with a Splunk Deployment Server where the /var filesystem is 30 GB in size and 22 GB of it are currently used by the log "uncategorised.log" under the path /var/log/syslog. Is it viable/possible to delete that log, or to back it up to tape or to a different server?
I'm trying to UNION two different tables containing info on foreign traffic - the first table is a log with time range earliest=-24h latest=-1h. The second is the logs of those same systems for the full 24 hours (earliest=-24h latest=now()). My search:

| union
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=-1h
      | eval dest_loc_ip1=dest_location."-".dest_ip
      | stats dc(dest_loc_ip1) as oldconnections by src_ip ]
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=now()
      | eval dest_loc_ip2=dest_location."-".dest_ip
      | stats dc(dest_loc_ip2) as allconnections by src_ip ]
| fields src_ip oldconnections allconnections

I am trying to compare the values of oldconnections vs allconnections for only the original systems (basically a left join), but for some reason allconnections shows all null values. I get a similar issue when trying a left join - the allconnections values are not consistent with the values when I run that search by itself. I can run the two searches separately with the expected results, so I'm guessing there's an error in my UNION syntax and ordering. Thanks for the help! Also open to other ways to solve this.
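For what it's worth, union keeps the rows from each subsearch separate: each row carries either oldconnections or allconnections, never both, which is why one column appears null. A hedged sketch of one fix, reusing the placeholders above, is to aggregate by src_ip after the union so the two row sets are merged, then keep only the src_ip values that appeared in the first (older) window:

| union
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=-1h
      | eval dest_loc_ip1=dest_location."-".dest_ip
      | stats dc(dest_loc_ip1) as oldconnections by src_ip ]
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=now()
      | eval dest_loc_ip2=dest_location."-".dest_ip
      | stats dc(dest_loc_ip2) as allconnections by src_ip ]
| stats values(oldconnections) as oldconnections values(allconnections) as allconnections by src_ip
| where isnotnull(oldconnections)

A single search over earliest=-24h latest=now() with two dc(eval(...)) aggregations, one of them restricted to events older than -1h, would also avoid the union entirely.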
New install of Splunk 9.3 on RedHat Enterprise 7.9. The initial install was successful. Changed the indexer to a peer indexer & restarted Splunk - the splunk service loads successfully until:
"Waiting for web interface at https://127.0.0.1:8000 to be available...."
Systemd then fails with:
Warning: web interface does not seem to be available!
splunk.service: control process exited, code=exited status=1
Failed to start SYSV: Splunk indexer service
Unit splunk.service entered failed state.
Have searched for a solution to no avail. The firewall is disabled.
Hello All, I am trying to plot the count of events per day over the span of a week, using the Scatterplot Matrix visualization, to see whether there is any linear relation. I need to plot 4 charts, one for each week of the month, since there are restrictions on the number of datapoints a single chart can publish. But when I plot more than one chart, the dashboard breaks down and I start getting the error: "Error rendering Scatterplot Matrix visualization". I need your guidance to resolve the error. Thank you, Taruchit
Insight on my problem below is appreciated! I am using DB Connect to attempt to connect to an MSSQL database. When I Save/Edit the connection I get the following error from Splunk Web:

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:XXXXXXXXXXXXXXXXXX

And the following (combined) error from splunk_app_db_connect_server.log and splunk_app_db_connect_audit_server.log:

com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:XXXXXXXXXXXXXXXXXX
...........................
Caused by: java.security.cert.CertPathValidatorException: Algorithm constraints check failed on signature algorithm: SHA1withRSA
    at java.base/sun.security.provider.certpath.AlgorithmChecker.check(AlgorithmChecker.java:237)
    at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkAlgorithmConstraints(SSLContextImpl.java:1661)
    ... 99 common frames omitted

I have tried the following to resolve the problem, with no luck:
- Added the following to the DB Connect Task Server JVM Options: -Djdk.tls.client.protocols="TLSv1,TLSv1.1,TLSv1.2"
- Added the following parameters to the JDBC URL: encrypt=true;trustServerCertificate=true;

I have also installed and attempted to run the DB Connect troubleshooting tool (ran using the following command: python3 -m troubleshooting_tools.start)

|----|----|----|----|----|
| DB Connect |
| Troubleshooting Tools |
|----|----|----|----|----|
Which tool do you want to use?
1. Troubleshoot Starts
2. Services Status
3. Troubleshoot Connections
4. Troubleshoot Inputs
: 3
Troubleshoot Connections
Splunk URL: localhost
Splunk management port: 8089
Splunk username (Default value is <admin>): admin
Splunk password: ********
Connection name: MY_CONNECTION
Connector path: %PATH_TO_CONNECTOR_JAR%
JDBC path: %PATH_TO_JDBC_DRIVER_JAR%

Which leads to the following output:

An error occurred while trying to get the connection with the name : MY_CONNECTION. Error message: Data must be padded to 16 byte boundary in CBC mode

In addition, here is some information regarding my environment:
- OS: Oracle Linux 9
- Splunk Enterprise: Splunk 9.1.0.2
- Splunk DB Connect: 3.14.1
- Splunk DBX Add-on for Microsoft SQL Server JDBC: 1.2.0
- Manually installed additional Microsoft JDBC Driver 12.4 for SQL Server: mssql-jdbc-12.4.1.jre11.jar
- Java: openjdk 11.0.20

***The above errors are occurring for both Connection Types.
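For context on the SHA1withRSA message: recent JDKs reject TLS server certificates signed with SHA-1 through the jdk.certpath.disabledAlgorithms security property, which the JVM options and JDBC URL flags above don't affect. A hedged sketch of the usual workaround is below - the file path is an assumption and depends on which JRE the DB Connect task server actually uses, and re-issuing the SQL Server certificate with a SHA-256 signature is the cleaner long-term fix.

# <java_home>/conf/security/java.security  (path is an assumption; edit the JRE DB Connect runs on)
# Find the jdk.certpath.disabledAlgorithms property and remove the "SHA1 jdkCA & usage TLSServer"
# token from its value, keeping everything else intact. Simplified example of the result:
jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224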
Hello Everybody, We've installed Splunk 9.1.1 on-prem, and now, unfortunately, the Browser Icon Changer app no longer works. Message: "HTML Dashboards are no longer supported." In Splunkbase, version 9.1 is listed as supported. Did we do anything wrong? BR, Martin
I am getting a different sourcetype name in my logs, but I want the sourcetype name as defined in the conf files. Below are screenshots of inputs.conf, props.conf & transforms.conf.
[Screenshots: Props & Transforms, Inputs]
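Since the screenshots did not come through, here for reference is a minimal sketch of the usual sourcetype-override pattern; the stanza and sourcetype names are hypothetical placeholders, and the props/transforms pair has to live on the first full Splunk instance that parses the data (heavy forwarder or indexer), not on a universal forwarder, for the rename to take effect.

# props.conf
[my_original_sourcetype]
TRANSFORMS-set_sourcetype = set_my_sourcetype

# transforms.conf
[set_my_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my_desired_sourcetype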
We have a SEDCMD masking a field, and it correctly masks the data shown in the event; however, in the expanded info on the event it is not masked. Has anyone seen this before? Working with Proofpoint logs.
Hello, How can I use Splunk to run a report of all DFS users who logged into the VPN last week, 9/11-9/15? I'll need to be able to view the usernames. We have a Cisco environment. Thank you, Anthony
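A starting-point sketch is below, assuming the VPN logs are Cisco ASA/AnyConnect events indexed with sourcetype=cisco:asa in a hypothetical index called network, and that the week in question is in 2023; the index, sourcetype, search strings, and dates all need adjusting to your environment.

index=network sourcetype=cisco:asa "AnyConnect" "session started" earliest="09/11/2023:00:00:00" latest="09/16/2023:00:00:00"
| stats earliest(_time) as first_login latest(_time) as last_login count as logins by user
| convert ctime(first_login) ctime(last_login)

If the Cisco add-on is installed, filtering on tag=authentication action=success (or the Authentication data model) may be more reliable than matching on raw strings.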
Hi, is there a query to list all the queries that time out in Splunk Cloud? Thank you. Kind regards, Marta
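One place to look, sketched below, is the audit trail. This assumes the _audit index is searchable on your Splunk Cloud stack and that searches which time out finish with a non-completed status (for example canceled or failed), so treat it as a starting point rather than an exact timeout report:

index=_audit sourcetype=audittrail action=search search_id=* info=*
| stats latest(info) as final_status max(total_run_time) as run_time_seconds latest(user) as user latest(savedsearch_name) as savedsearch_name by search_id
| where final_status!="completed"
| sort - run_time_seconds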
Is there a way to point to an existing event in Splunk using a URI link like https://mysplunk.mycompany.com/....
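As far as I know there is no built-in per-event permalink, but one common pattern - sketched below with a placeholder hostname, index, and search terms - is to link to the Search app with a query and a narrow time window URL-encoded in the q, earliest, and latest parameters, so the link re-runs a search that returns just that event:

https://mysplunk.mycompany.com/en-US/app/search/search?q=search%20index%3Dmain%20host%3Dweb01%20%22unique-event-string%22&earliest=1694700000&latest=1694700001

The earliest/latest values here are epoch seconds bracketing the event's _time.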
Hello, I would like to know if it is possible to add a hyperlink in a table cell/column. I have a column titled Link and the values are URLs, so I would like them to be clickable as links. I know this can be achieved by editing the XML source, but that is not possible in Dashboard Studio, right? Please let me know if there is a way to do this. Many thanks.
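One possibility, sketched below against the Dashboard Studio source JSON, is a custom-URL drilldown on the table visualization that opens the clicked row's Link value; "Link" is the column name from the question, and the option names should be checked against your Splunk version. The fragment goes inside the table visualization's definition:

"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "$row.Link.value$",
            "newTab": true
        }
    }
]

This makes the whole row clickable rather than rendering the cell itself as an anchor, which as far as I know is the closest Dashboard Studio currently gets to a per-cell hyperlink.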
Hello All, Can we implement time series analysis and anomaly detection in Splunk using the Matrix Profile approach? If yes, can you please suggest an approach, considering that we need to fetch the Euclidean distance of multiple sub-sequences for a given time series and then make decisions. Thank you, Taruchit
Hi Everybody, Could you please explain which works faster - one high-performance indexer server (48 CPU, 128 GB) or 2 indexers in a cluster (each with 24 CPU and 64 GB)? Thanks for the reply. Regards, pawelF
Hi all, I have a custom summary index that holds the required fields from many indexes in order to build a dashboard. The problem is that when P2 in the first panel shows a count of 36, we have a drilldown on that number so we can check more details, and at that point the counts mismatch, because the custom summary index is refreshed every 2 minutes and the dashboard takes time to load. Please let me know how to fix this so that, when the drilldown panel loads, its count matches the first panel.
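One mitigation, sketched below with a hypothetical summary index name and placeholder fields, is to pin both the overview panel and the drilldown to the same minute-snapped window, so a summary refresh that lands between the page load and the drilldown click is excluded from both searches. This narrows the race rather than eliminating it completely.

Overview panel:
index=my_summary earliest=-60m@m latest=@m | stats count

Drilldown panel:
index=my_summary earliest=-60m@m latest=@m | table _time src dest action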
Hi, I have the same field whose value has to be compared between 2 search queries, so kindly help with the below.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time,Country,Environment,appID,LogMessage
| append [search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID1>\d+)"
    | fillnull value=NULL RequestID1
    | search RequestID1!=NULL
    | table LogMessage1]
| eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL")

In the above query, we have RequestID in the main query and in the sub query as well. We have to find the error logs based on RequestID, which means that if RequestID matches RequestID1, we need to display LogMessage1.
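For what it's worth, the rows brought in by append do not share fields with the rows from the main search, so RequestID=RequestID1 never evaluates on the same row. A hedged sketch of an alternative, assuming both event types live in index=xyz, that LogMessage is the message field in both (the error rows are what the original search tables as LogMessage1), and with the `mymacrocompo` filter omitted for brevity, is to pull both sets in one search and group by the shared trace ID:

index=xyz appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| where isnotnull(RequestID)
| stats values(Country) as Country values(Environment) as Environment
        values(LogMessage) as LogMessage
        values(eval(if(level="error", LogMessage, null()))) as ErrorLogs
        by RequestID
| where isnotnull(ErrorLogs)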
I need a query that extracts TLDs from events and compares the results with a lookup table of blocklisted TLDs.
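A minimal sketch, assuming the events have a field called domain and a lookup file named blocklisted_tlds.csv with a single column named tld (all of these names are placeholders to swap for your own):

index=<your_index> domain=*
| rex field=domain "\.(?<tld>[^\.]+)$"
| eval tld=lower(tld)
| lookup blocklisted_tlds.csv tld OUTPUT tld as blocked_tld
| where isnotnull(blocked_tld)
| stats count by tld, domain

If the domains first have to be carved out of free-form text or full URLs, the URL Toolbox app's parsing macros are a common alternative to the rex.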
Hello All, I need your help with using the head command by passing its parameter at run time. The background is as follows:

I am working on building an SPL search to identify anomalous events in a time series dataset. I fetched the average count of events for each hour and compared it with a moving average to identify the datapoints and time instances when the average count of events during an hour was significantly greater than the moving average at that point in time. To define "significantly greater", I computed the difference between the average count of events for a day and the moving average up to that day, and determined the percentage difference with respect to the moving average.

The observations varied across datasets: for dataset 1, 10% of all events had a percentage difference >= 90%, whereas for dataset 2, 20% of all events had a percentage difference >= 90%. Thus, I decided to sort the results in descending order of percentage difference and fetch the first 10% of the total events using the head command.

Since the count of events returned varies for each dataset, how do I compute and fetch the top 10% of events when the dataset is sorted in descending order of percentage difference?

Please let me know if I need to clarify or share any more details to articulate the above query better. Thank you, Taruchit
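One way to avoid a hard-coded head value, sketched below with pct_diff standing in for your percentage-difference field and appended to your existing search, is to number the sorted rows with streamstats, count the total with eventstats, and keep only the rows that fall in the top 10%:

| sort 0 - pct_diff
| eventstats count as total_rows
| streamstats count as row_num
| where row_num <= ceiling(total_rows * 0.10)
| fields - total_rows row_num

Here sort 0 removes the default 10,000-row limit, and ceiling() rounds up so at least one row survives even for very small result sets.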