All Topics


I'm trying to union two different tables containing info on foreign traffic. The first table is a log with time range earliest=-24h latest=-1h; the second contains logs of those same systems for the full 24 hours (earliest=-24h latest=now()). My search:

| union
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=-1h
    | eval dest_loc_ip1=dest_location."-".dest_ip
    | stats DC(dest_loc_ip1) as oldconnections by src_ip]
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=now()
    | eval dest_loc_ip2=dest_location."-".dest_ip
    | stats DC(dest_loc_ip2) as allconnections by src_ip]
| fields src_ip oldconnections allconnections

I am trying to compare the values of oldconnections vs. allconnections for only the original systems (basically a left join), but for some reason allconnections shows all null values. I get a similar issue when trying a left join: the allconnections values are not consistent with what I get when I run that search by itself. I can run the two searches separately with the expected results, so I'm guessing there's an error in my union syntax and ordering. Thanks for the help! I'm also open to other ways to solve this.
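A single pass over the full 24 hours can produce both counts without union or join. This is only a minimal sketch, reusing the placeholder filters from the post and assuming "old" means anything older than one hour:

index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=now()
| eval dest_loc_ip=dest_location."-".dest_ip
| eval old_loc_ip=if(_time < relative_time(now(), "-1h"), dest_loc_ip, null())
| stats dc(old_loc_ip) as oldconnections dc(dest_loc_ip) as allconnections by src_ip

Because dc() ignores null values, old_loc_ip only contributes for events older than one hour, so each src_ip row carries both oldconnections and allconnections and the two can be compared directly.
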
New install of Splunk 9.3 on Red Hat Enterprise Linux 7.9. The initial install was successful. I changed the indexer to a peer indexer and restarted Splunk. The splunk service loads successfully until "Waiting for web interface at https://127.0.0.1:8000 to be available....", then systemd fails with:

Warning: web interface does not seem to be available!
splunk.service: control process exited, code=exited status=1
Failed to start SYSV: Splunk indexer service
Unit splunk.service entered failed state.

I have searched for a solution to no avail. The firewall is disabled.
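If splunkd itself stays up, the internal logs sometimes show why the web interface is not coming up. A rough diagnostic sketch using standard internal log sources (run each line as its own search):

index=_internal source=*splunkd.log* log_level=ERROR earliest=-1h
index=_internal source=*web_service.log* ERROR earliest=-1h

Nothing here is specific to this host; if the web port never binds, the same errors should also be visible directly in $SPLUNK_HOME/var/log/splunk/web_service.log on disk.
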
Hello All, I am trying to plot the count of events per day over a span of a week, using the scatterplot matrix visualization, to see if there is any linear relation. I need to plot 4 charts, one for each week of the month, since there are restrictions on the number of datapoints a single chart can publish. But when I plot more than one chart, the dashboard breaks down and I start getting the error "Error rendering Scatterplot Matrix visualization". I need your guidance to resolve the error. Thank you, Taruchit
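For reference, the per-day counts feeding each chart can be produced with a small search scoped to one week; a sketch with a placeholder index:

index=<your_index> earliest=-7d@d latest=@d
| bin _time span=1d
| stats count by _time

Each of the four panels would repeat this with its own earliest/latest for its week, which also keeps the number of datapoints per chart low.
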
Insight on my problem below is appreciated! I am using DB Connect to attempt to connect to an MSSQL database. When I save/edit the connection I get the following error from Splunk Web:

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:XXXXXXXXXXXXXXXXXX

And the following (combined) error from splunk_app_db_connect_server.log and splunk_app_db_connect_audit_server.log:

com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:XXXXXXXXXXXXXXXXXX
...........................
Caused by: java.security.cert.CertPathValidatorException: Algorithm constraints check failed on signature algorithm: SHA1withRSA
at java.base/sun.security.provider.certpath.AlgorithmChecker.check(AlgorithmChecker.java:237)
at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkAlgorithmConstraints(SSLContextImpl.java:1661)
... 99 common frames omitted

I have tried the following to resolve the problem, with no luck:
- Added the following to the DB Connect Task Server JVM options: -Djdk.tls.client.protocols="TLSv1,TLSv1.1,TLSv1.2"
- Added the following parameters to the JDBC URL: encrypt=true;trustServerCertificate=true;

I have also installed and attempted to run the DB Connect troubleshooting tool (run with: python3 -m troubleshooting_tools.start):

|----|----|----|----|----|
|       DB Connect        |
|  Troubleshooting Tools  |
|----|----|----|----|----|
Which tool do you want to use?
1. Troubleshoot Starts
2. Services Status
3. Troubleshoot Connections
4. Troubleshoot Inputs
: 3
Troubleshoot Connections
Splunk URL: localhost
Splunk management port: 8089
Splunk username (Default value is <admin>): admin
Splunk password: ********
Connection name: MY_CONNECTION
Connector path: %PATH_TO_CONNECTOR_JAR%
JDBC path: %PATH_TO_JDBC_DRIVER_JAR%

Which leads to the following output:

An error occurred while trying to get the connection with the name: MY_CONNECTION. Error message: Data must be padded to 16 byte boundary in CBC mode

In addition, here is some information regarding my environment:
- OS: Oracle Linux 9
- Splunk Enterprise: 9.1.0.2
- Splunk DB Connect: 3.14.1
- Splunk DBX Add-on for Microsoft SQL Server JDBC: 1.2.0
- Manually installed additional Microsoft JDBC Driver 12.4 for SQL Server: mssql-jdbc-12.4.1.jre11.jar
- Java: OpenJDK 11.0.20

The above errors occur for both connection types.
Hello everybody, we've installed Splunk 9.1.1 on-prem and now, unfortunately, the Browser Icon Changer app no longer works. Message: "HTML Dashboards are no longer supported." In Splunkbase, version 9.1 is listed as supported. Did we do anything wrong? BR, Martin
I am getting a different sourcetype name in my logs, but I want the sourcetype name as defined in the conf files. Below are the screenshots of inputs.conf, props.conf & transforms.conf. [Screenshots: props.conf & transforms.conf; inputs.conf]
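For comparison, a typical index-time sourcetype override looks like the sketch below; the stanza and sourcetype names are placeholders, not taken from the screenshots:

transforms.conf:
[force_my_sourcetype]
REGEX = .
FORMAT = sourcetype::my_target_sourcetype
DEST_KEY = MetaData:Sourcetype

props.conf:
[original_sourcetype]
TRANSFORMS-force_st = force_my_sourcetype

The props.conf stanza has to match the sourcetype assigned in inputs.conf, and both files must sit on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder, for the rename to take effect.
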
We have a SEDCMD masking a field; the data is correctly masked in the event as shown, however in the expanded info on the event it is not masked. Has anyone seen this before? We are working with Proofpoint logs.
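For reference, a SEDCMD only rewrites _raw at index time; a minimal props.conf sketch with placeholder stanza and pattern names:

[proofpoint_sourcetype]
SEDCMD-mask_id = s/accountId=\d+/accountId=########/g

Field values shown in the expanded event details can come from index-time extractions or other sources rather than from the masked _raw, which is one possible reason the two views differ.
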
Hello, how can I use Splunk to run a report of all DFS users who logged into the VPN last week, 9/11-9/15? I'll need to be able to view the usernames. We have a Cisco environment. Thank you, Anthony
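A possible starting point, assuming the Splunk Add-on for Cisco ASA is in place providing sourcetype cisco:asa with a user field, and that the year is 2023; the index name is a placeholder and the DFS-only filter (for example a lookup of that group's usernames) still needs to be added:

index=<vpn_index> sourcetype=cisco:asa user=* earliest="09/11/2023:00:00:00" latest="09/16/2023:00:00:00"
| stats count AS logins earliest(_time) AS first_login latest(_time) AS last_login by user
| convert ctime(first_login) ctime(last_login)

latest is set to the start of 9/16 so that all of 9/15 is included.
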
Hi, is there a query to list all the queries that time out in Splunk Cloud? Thank you. Kind regards, Marta
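One place to look is the audit index, where searches that stopped before completing are recorded with an info value other than completed. A rough sketch; timeouts are not labelled explicitly, so the canceled/failed entries still need to be reviewed:

index=_audit sourcetype=audittrail action=search (info=canceled OR info=failed)
| table _time user savedsearch_name total_run_time search
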
Is there a way to point to an existing event in Splunk using a URI link like https://mysplunk.mycompany.com/....
Hello, I would like to know if it is possible to add a hyperlink in a table cell/column. I have a column titled Link and the values are URLs, so I would like them to be clickable as links. I know this can be achieved by editing the XML source, but this is not possible in Dashboard Studio, right? Please let me know if there is a way to do this. Many thanks.
Hello All, can we implement time series analysis and anomaly detection in Splunk using the Matrix Profile approach? If yes, can you please suggest an approach, considering that we need to fetch the Euclidean distance of multiple sub-sequences for a given time series and then make decisions on it. Thank you, Taruchit
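For reference, the quantity involved is the z-normalized Euclidean distance between length-m subsequences of a series T, and the Matrix Profile stores, for each subsequence, the distance to its nearest non-trivial neighbour. In LaTeX notation:

d_{i,j} = \sqrt{\sum_{k=0}^{m-1}\left(\frac{T_{i+k}-\mu_i}{\sigma_i}-\frac{T_{j+k}-\mu_j}{\sigma_j}\right)^2}
\qquad
P_i = \min_{|i-j| \ge m/4} d_{i,j}

where \mu_i and \sigma_i are the mean and standard deviation of the subsequence starting at i, and the exclusion zone (commonly around m/4) avoids trivial self-matches. High values of P_i mark discords (anomaly candidates); low values mark motifs.
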
Hi everybody, could you please explain which works faster: one high-performance indexer (48 CPU, 128 GB) or two indexers in a cluster (24 CPU, 64 GB each)? Thanks for the reply. Regards, pawelF
Hi all, I have a custom summary index which holds the required fields from many indexes in order to build a dashboard. The problem is that when P2 in the first panel shows a count of 36 and we drill down on that number to check more details, the counts mismatch, because the custom summary index is refreshed every 2 minutes and the dashboard takes time to load. Please let me know how to fix this so that, on load, the drilldown panel's count matches the first panel.
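One way to keep the two views consistent is to pin both the panel and its drilldown to the same snapped time window that lags behind the summary refresh; a sketch with a placeholder index name and a hypothetical 5-minute lag:

index=<custom_summary_index> earliest=-24h@m latest=-5m@m

If the drilldown reuses exactly these earliest/latest values (for example, passed along as tokens), a summary refresh that lands between the two page loads no longer changes the counts.
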
Hi, I have the same field whose value has to be compared between 2 search queries, so kindly help with the below.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time,Country,Environment,appID,LogMessage
| append
    [ search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID1>\d+)"
    | fillnull value=NULL RequestID1
    | search RequestID1!=NULL
    | table LogMessage1]
| eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL")

In the above query we have RequestID in the main query and RequestID1 in the subsearch. We have to find the error logs based on RequestID, which means that if RequestID matches RequestID1, we need to display LogMessage1.
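Because append only stacks the two result sets, RequestID and RequestID1 never end up in the same row, so the if() comparison cannot match. One alternative is to pull both event sets in a single search and group by the trace id; this is only a sketch: the field names (level, Country, Environment, LogMessage) come from the post, the macro filter is omitted, and the component=gateway filter is dropped so that the error events are included too:

index=xyz appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| where isnotnull(RequestID)
| eval ErrorLogMessage=if(level="error", LogMessage, null())
| stats values(Country) AS Country values(Environment) AS Environment values(LogMessage) AS LogMessage values(ErrorLogMessage) AS Errorlogs by RequestID

Rows where Errorlogs is populated are the request ids that also produced an error event.
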
I need a query that extracts TLDs from events and compares the results against a lookup table of blocklisted TLDs.
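A rough sketch, assuming a field holding the domain (called <domain_field> here) and a CSV lookup file named blocklisted_tlds.csv with a single tld column; all of these names are placeholders:

index=<your_index> sourcetype=<your_sourcetype>
| eval tld=lower(replace(<domain_field>, "^.*\.", ""))
| lookup blocklisted_tlds.csv tld OUTPUT tld AS blocked_tld
| where isnotnull(blocked_tld)
| stats count by tld

The replace() strips everything up to the last dot to leave only the TLD, and the lookup returns blocked_tld only when the TLD exists in the blocklist, so the where clause keeps just the matches.
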
Hello All, I need your help with using the head command by passing its parameter at run time. The background is as follows: I am working on an SPL search to identify anomalous events in a time series dataset. I fetched the average count of events for each hour and compared it with a moving average, to identify the datapoints and time instances when the hourly average was significantly greater than the moving average at that point in time. To quantify "significantly greater", I computed the difference between the average count of events for a day and the moving average up to that day, and determined the percentage of difference with respect to the moving average. The observations varied across datasets: for dataset 1, 10% of all events had a percentage of difference >= 90%, whereas for dataset 2, 20% of all events did. Thus, I decided to sort the results in descending order of percentage of difference and fetch the first 10% of the total events using the head command. Since the count of events returned varies for each dataset, how can I compute and fetch 10% of events when the dataset is sorted in descending order of percentage of difference? Please let me know if I need to clarify or share any more details to articulate the above query better. Thank you, Taruchit
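One way to take a relative top slice without head is to compute the result count at search time; a sketch assuming the percentage-of-difference field is called pct_diff (a placeholder name):

<existing search that computes pct_diff per result>
| sort 0 - pct_diff
| eventstats count AS total_results
| streamstats count AS row
| where row <= ceil(total_results * 0.10)

sort 0 avoids the default result limit, eventstats attaches the overall count to every row, and streamstats numbers the rows so that the where clause keeps the top 10% regardless of how many results the dataset produces.
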
Summary: On a CentOS Stream 9 system, after installing Splunk in /opt/splunk and configuring it to start on boot with systemd, I've noticed unusual behavior. Using manual Splunk commands (/opt/splunk/bin/splunk [start | stop | restart]) alters the Splunkd.service file in /etc/systemd/system/, creating a timestamped backup. This change prevents Splunk from starting via systemctl commands, and consequently on boot, defeating the purpose of the systemd setup. Using chattr to make the service file immutable is a current workaround. This behavior seems specific to CentOS Stream 9.

How to recreate the issue: On a CentOS Stream 9 machine, install Splunk under /opt/splunk and run Splunk as user 'splunk'. After stopping Splunk, enable boot-start with systemd-managed 1. After enabling boot-start, a file is created at /etc/systemd/system/Splunkd.service. Starting and stopping Splunk using systemctl works fine and as expected. However, if you run sudo /opt/splunk/bin/splunk [start | stop | restart], Splunk itself changes /etc/systemd/system/Splunkd.service and creates a backup with a timestamp, e.g. Splunkd.service_2023_09_21_06_49_05. When trying to start with systemctl again, e.g. sudo systemctl start Splunkd:

Failed to start Splunkd.service: Unit Splunkd.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy
See system logs and 'systemctl status Splunkd.service' for details.

This leads to Splunk not starting after reboot, which is the whole point of enabling systemd. The error message shows up because the Splunkd.service file has been altered. To get systemctl working again, I run sudo systemctl daemon-reload, but as soon as I do another manual start | stop | restart, the same issue arises.

When diffing the new service file against the old one with diff Splunkd.service Splunkd.service_2023_09_21_06_49_05:

26c26
< MemoryLimit=3723374592
---
> MemoryLimit=3723378688

MemoryLimit is the only value that changes in each subsequent 'backup' of the service file; it just switches between these two values.

ChatGPT suggested making the service file immutable with sudo chattr +i /etc/systemd/system/Splunkd.service. After this change, whenever you do a manual start | stop | restart you get a warning message, but it won't mess up your service file, and hence Splunk will start after reboot. So it is Splunk itself that is changing the service file. However, this issue was discovered on CentOS Stream 9 and cannot be replicated on earlier versions. Does anybody know what may have caused this weird error?
Hi, I am trying to learn more about the certificates found within the file /etc/auth/appsCA.pem. I'm referring to Splunk's default certificates: GlobalSign Root CA, GlobalSign ECC, DigiCert Global Root, ISRG Root, IdenTrust Commercial Root. Are they safe? After changing the certificate configuration to my self-signed certificates and merging in the Splunk CA certificates to make Splunkbase work properly (this case here), I wondered whether all of them are necessary for Splunk to work successfully, or only some of them. Is there a documentation page, or can someone explain the use of each of the certificates? Thanks in advance.