All Topics

Hi all, today I updated Splunk Enterprise from 9.0.5 to 9.1.1. Since the update I see the following messages on the start page (the UI is in German): "Unable to load the app list. Refresh the page to try again." and "Unable to load common tasks. Refresh the page to try again." Reloading the page doesn't solve the issue. A reboot of the Windows machine where Splunk is installed doesn't help either. Splunk seems to work fine otherwise. But do you have any ideas how to solve the issue? Thank you.
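A hedged first diagnostic step for an issue like this, since the messages point at the UI failing to load app data right after the upgrade: look at what splunkd itself has been logging. The search is generic (nothing here is specific to this failure); it only surfaces the noisiest error/warning components over the last day so you can see whether something app- or appserver-related broke during the upgrade:

index=_internal source=*splunkd.log* (ERROR OR WARN) earliest=-24h
| stats count by component, log_level
| sort - count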
I want to get the volume of occurrences of a specific word, "ERROR", on a specific server over the last 7 days. How do I do that? Please help.
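A minimal SPL sketch, assuming the server is identified by the host field and that my_index / my_server are placeholders to replace with your own values:

index=my_index host=my_server "ERROR" earliest=-7d@d
| timechart span=1d count AS error_volume

Dropping the timechart line and using | stats count instead returns a single total for the whole 7-day window rather than a daily breakdown.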
Splunk shows duplicate events in search results when there are no duplicates in the source file.
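A quick diagnostic sketch for a case like this (index and sourcetype are placeholders): it groups events by their raw text to confirm whether the "duplicates" really are byte-identical copies, and shows which source and indexer they came from, which usually points to a file being read twice or data being forwarded to more than one destination:

index=my_index sourcetype=my_sourcetype
| stats count values(source) AS sources values(splunk_server) AS indexers by _raw
| where count > 1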
I have a saved search for the vpcflow logs sourcetype which searches for a particular CIDR (src_ip & dest_ip), but it takes almost 3-4 hours to run when it searches over the last 6 months. I want the output for external reporting. What is the best way forward to save time and resources? We don't have data models on our search head.
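One common approach when data models aren't available is summary indexing: schedule a daily search that writes only the matching flows into a summary index, then run the 6-month report against that much smaller index. A hedged sketch; the sourcetype, CIDR, bytes field and summary index name are assumptions to adapt.

Daily scheduled search:

index=aws sourcetype=aws:cloudwatchlogs:vpcflow earliest=-1d@d latest=@d
| where cidrmatch("10.0.0.0/8", src_ip) OR cidrmatch("10.0.0.0/8", dest_ip)
| stats count AS flows sum(bytes) AS bytes by src_ip dest_ip action
| collect index=summary_vpcflow

Reporting search over 6 months:

index=summary_vpcflow earliest=-6mon@mon
| stats sum(flows) AS flows sum(bytes) AS bytes by src_ip dest_ip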
We are using the Splunk OPC Add-On to bring in some tags. We have two specific tags that we are currently looking at. Tag 1's value will always be "Productive" or "Non-productive". Tag 2's value will be a current string value or blank. We are hoping that we can alert if Tag1 = "Productive" and Tag2 != "", so we can return a result and alert off of it. I have tried: "Tag1"="Productive" AND NOT isnull("Tag2") but that doesn't return any results when there should be a few. I'm not sure if I need to combine these somehow?
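A hedged sketch of one way to express that condition: isnull() is an eval/where function, so it can't be used directly as a search term, and quoting "Tag2" inside it compares against the literal string rather than the field. Assuming the fields really are named Tag1 and Tag2, and with the base search as a placeholder:

index=opc_data Tag1="Productive" Tag2=*
| where Tag2!=""

Tag2=* keeps only events where Tag2 exists at all; the where clause then drops events where it was extracted as an empty string.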
Hi Team, I have the raw log below:

2023-08-30 07:43:28.671 [INFO ] [Thread-18] ReadFileImpl - ebnc event balanced successfully

My current query:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| eval EBNCMessage="ebnc event balanced successfully"
| table EBNCMessage True

This message occurs once for every file that gets processed. I want to show only one row:

ebnc event balanced successfully    true

But it comes back 8 times because 8 files were processed.
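A hedged sketch of one way to collapse this to a single row: aggregate with stats instead of tabling every matching event (the search criteria are copied from the post, otherwise unchanged):

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| stats count AS files_processed
| eval EBNCMessage="ebnc event balanced successfully", True=if(files_processed>0,"✔","")
| table EBNCMessage True

Adding | dedup EBNCMessage before the table command in the original query would also reduce the output to one row, but stats makes the "at least one file balanced" logic explicit.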
Hi Team, I have the raw logs below:

2023-08-30 07:43:29.000 [INFO ] [Thread-18] StatisticBalancer - statisticData: StatisticData [selectedDataSet=13283520, rejectedDataSet=0, totalOutputRecords=20670402, totalInputRecords=0, fileSequenceNum=9226, fileHeaderBusDt=08/29/2023, busDt=08/29/2023, fileName=TRIM.UNB.D082923.T045920]

2023-08-30 05:36:30.678 [INFO ] [Thread-19] StatisticBalancer - statisticData: StatisticData [selectedDataSet=27, rejectedDataSet=0, totalOutputRecords=27, totalInputRecords=0, fileSequenceNum=6395, fileHeaderBusDt=08/29/2023, busDt=08/29/2023, fileName=TRIM.CNX.D082923.T052656]

I want to fetch records only for the highlighted file, not for the other files, but I am getting both files. My current query:

index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData"
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
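A hedged sketch of a corrected extraction and filter, limited to the fields visible in the sample events and assuming the file of interest is the TRIM.UNB.* one (adjust the fileName filter as needed). The capture groups in the posted rex have no pattern inside them, so they only match empty strings, and the additional fields (totalAchCurrOutstBalAmt, etc.) don't appear in these sample events:

index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData"
| rex "selectedDataSet=(?<selectedDataSet>\d+),\s*rejectedDataSet=(?<rejectedDataSet>\d+),\s*totalOutputRecords=(?<totalOutputRecords>\d+).*?,\s*busDt=(?<busDt>[^,]+),\s*fileName=(?<fileName>[^\]]+)\]"
| search fileName="TRIM.UNB.*"
| table busDt fileName selectedDataSet rejectedDataSet totalOutputRecords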
Does anyone have a creative solution or know if there is an obscure way in Splunk to prepend a certain string to the beginning of email subjects that are sent from Splunk? I'm looking for something that users could not override when they create an alert or report. I do know about the email footer option in the email setup screen to add a static footer that cannot be altered by users and we do employ that as well. I'm trying to do something like this with the email subject. Thanks.
Hi Everyone, Is there someone who knows how to export Splunk ITSI Entities to a CSV file including their aliases, fields and services? Thanks!
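A hedged sketch, assuming the ITSI entity collection is exposed to search as the itsi_entities KV store lookup (the lookup name and its field layout vary by ITSI version, so treat both as assumptions and verify under Settings > Lookups):

| inputlookup itsi_entities
| outputcsv itsi_entities_export.csv

The exported rows carry the alias and informational field keys/values; mapping entities to services typically needs the service objects as well, which this simple export does not cover.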
Hi there, I'm pretty new to Splunk, so sorry if this is an easy task. I have the following example events in my index. It is an export from Zabbix monitoring (host = WMS_NAME1, source = http:its_wms_zabbix for both events):

8/31/23 4:39:31.000 PM  { description: mem Heap Memory used, groups: [...], hostname: WMS_NAME1, itemid: 186985, ns: 941726183, tags.application: Memory, type: 3, value: 1199488000 }

8/31/23 4:39:31.000 PM  { description: mem Heap Memory max, groups: [...], hostname: WMS_NAME1, itemid: 186984, ns: 883128205, tags.application: Memory, type: 3, value: 8589934592 }

Search query:

index="some_index" sourcetype="zabbix:history" hostname="WMS_NAME1" description="mem Heap Memory used" OR description="mem Heap Memory max"
| spath "groups{}"
| search "groups{}"="Instances/Tests*"
| eval ValueMB=value/1024/1024
| table _time, hostname, ValueMB

In this case, there are two events: one for Java heap memory usage and one for Java heap max memory. Is there any way to rename the value field based on the description in the event and join them in one table under the same time? Or maybe join both events into one? The main goal is to display both values in one graph and be able to monitor long-term usage. I found a way using multisearch, but it takes too much time to process and I believe there must be a simpler way. Thank you in advance for any hint.
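A hedged sketch of the usual single-search pattern for this: derive a metric name from the description with eval, then let timechart split by it so both series land in one result set ready for a line chart. The span and metric labels are arbitrary choices; re-add the groups{} filter from the original query if it is still needed:

index="some_index" sourcetype="zabbix:history" hostname="WMS_NAME1" (description="mem Heap Memory used" OR description="mem Heap Memory max")
| eval metric=case(description="mem Heap Memory used", "heap_used_MB", description="mem Heap Memory max", "heap_max_MB")
| eval ValueMB=value/1024/1024
| timechart span=5m max(ValueMB) BY metric

For a table keyed by timestamp instead of a chart, replace the timechart line with | chart max(ValueMB) over _time by metric.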
The third leaderboard update for The Great Resilience Quest is out >> Check out the Leaderboard. Kudos to our current front-runners and a warm welcome to our new players! There are still a lot of prizes to win as we're working on launching levels 3 and 4 soon! If you are new to the quest, it's not too late to dive in. Join the game and bolster your knowledge on achieving digital resilience with Splunk. Best regards, Splunk Customer Success
In the last month, the Splunk Threat Research Team (STRT) has had 4 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.8.0, v4.9.0, v4.10.0 and v4.11.1). With these releases, there are 24 new detections, 27 updated detections and 8 new analytic stories now available in Splunk Enterprise Security via the ESCU application update process. Content highlights include:

An analytic story with detections related to known activities of the group Flax Typhoon
Detections for organizations using Adobe ColdFusion related to critical vulnerabilities CVE-2023-29298 and CVE-2023-26360
Detection for critical vulnerability Ivanti Sentry (CVE-2023-38035)
An analytic story to detect suspicious activities potentially related to Ave Maria (aka Warzone) RAT
Updated detections related to Azure Active Directory
New detections for Windows Certificate Services and Active Directory Discovery
Updated detections for Clop ransomware variants
An analytic story related to Citrix ShareFile RCE CVE-2023-24489
Detections for zero-day vulnerabilities in Ivanti Endpoint Manager Mobile (EPMM) CVE-2023-35078 and CVE-2023-35082
A new analytic for hunting events associated with Splunk Vulnerability Disclosure SVD-2023-0606, in which an attacker can use a specially crafted web URL in their browser to cause log file injection. The attack inserts American National Standards Institute (ANSI) escape codes into specific files using a terminal program that supports those escape codes. The attack requires a terminal program that supports the translation of ANSI escape codes and requires additional user interaction to execute successfully.

New Analytic Stories:

Juniper JunOS Remote Code Execution
Flax Typhoon
Windows Error Reporting Service Elevation of Privilege Vulnerability
Ivanti Sentry Authentication Bypass CVE-2023-38035
Adobe ColdFusion Arbitrary Code Execution CVE-2023-29298 & CVE-2023-26360
Warzone RAT
Ivanti EPMM Remote Unauthenticated Access
Citrix ShareFile RCE CVE-2023-24489

New Detections:

Juniper Networks Remote Code Execution Exploit Detection
Windows SQL Spawning CertUtil
Ivanti Sentry Authentication Bypass
Adobe ColdFusion Access Control Bypass
Adobe ColdFusion Unauthenticated Arbitrary File Read
Splunk DOS via printf search function
Windows Bypass UAC via Pkgmgr Tool
Windows Mark of The Web Bypass
Windows Modify Registry MaxConnectionPerServer
Windows Unsigned DLL Side-Loading
Detect Certify Command Line Arguments (External Contributor @nterl0k)
Detect Certify with PowerShell Script Block Logging (External Contributor @nterl0k)
Windows Steal Authentication Certificates - ESC1 Authentication (External Contributor @nterl0k)
Windows Suspect Process with Authentication Traffic (External Contributor @nterl0k)
Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35078
Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35082
Citrix ShareFile Exploitation CVE-2023-24489
Windows Powershell RemoteSigned File
PowerShell Script Block With URL Chain (External contributor: Steven Dick)
PowerShell WebRequest Using Memory Stream (External contributor: Steven Dick)
Suspicious Process Executed From Container File (External contributor: Steven Dick)
Windows Registry Payload Injection (External contributor: Steven Dick)
Windows Scheduled Task Service Spawned Shell (External contributor: Steven Dick)
Splunk Unauthenticated Log Injection Web Service Log

Updated Detections:

Azure AD Global Administrator Role Assigned
Azure AD Multiple Users Failing to Authenticate from IP
Azure AD Service Principal Owner Added
Azure AD Unusual Number of Failed Authentications from IP
Azure AD Service Principal Created
Azure AD Privileged Role Assigned
Azure AD Privileged Authentication Administrator Role Assigned
Azure AD Application Administrator Role Assigned
Azure AD Multi-Factor Authentication Disabled
Azure AD External Guest User Invited
Azure AD User Enabled and Password Reset
Azure AD Service Principal New Client Credentials
Azure AD New Federated Domain Added
Azure AD New Custom Domain Added
Azure AD Successful Single-Factor Authentication
Azure AD Authentication Failed During MFA Challenge
Azure AD Successful PowerShell Authentication
Azure AD Multiple Failed MFA Requests for User
Azure AD User Immutable ID Attribute Updated
Azure Active Directory High Risk Sign-in
Unusually Long Command Line
Suspicious Copy on System32
Clop Common Exec Parameter (External contributor: DipsyTipsy)
O365 Added Service Principal
O365 New Federated Domain Added
O365 Excessive SSO logon errors
Splunk risky Command Abuse disclosed February 2023

For all our tools and security content, please visit research.splunk.com. — The Splunk Threat Research Team
Hello, I'm new to Splunk and despite searching extensively on this community site, I was not able to find a solution for what I thought was a rather simple problem. I would like to list, for each field in my index, its top 10 values. I've tried different commands with stats values and top, and the following one gives me what's closest, but the output is messy:

index=my_index | multireport [top limit=10 field_1] [top limit=10 field_2] [top limit=10 field_3]

I do get the top values of each field presented in different columns of the output, but also get many empty cells, with each row populated in only one column:

field_1                | field_2                | field_3
                       | a top value of field_2 |
                       | a top value of field_2 |
                       | a top value of field_2 |
                       |                        | a top value of field_3
                       |                        | a top value of field_3
a top value of field_1 |                        |
a top value of field_1 |                        |

while I would like something like this:

field_1                | field_2                | field_3
a top value of field_1 | a top value of field_2 | a top value of field_3
a top value of field_1 | a top value of field_2 | a top value of field_3
                       | a top value of field_2 |

Does someone have any idea how I could clean up the output and, ideally, easily loop through the column names so I don't have to write their names manually? Thanks!
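One hedged way to align the columns is to give every sub-result a rank with streamstats inside each multireport block, then merge on that rank; the field names are the placeholders from the post:

index=my_index
| multireport
    [ top limit=10 field_1 | streamstats count AS rank | fields rank field_1 ]
    [ top limit=10 field_2 | streamstats count AS rank | fields rank field_2 ]
    [ top limit=10 field_3 | streamstats count AS rank | fields rank field_3 ]
| stats first(field_1) AS field_1 first(field_2) AS field_2 first(field_3) AS field_3 by rank
| sort rank
| fields - rank

Looping over arbitrary column names isn't something plain SPL handles well; one alternative worth a look is | fieldsummary maxvals=10, which returns the top values of every field in a single pass, though as a JSON-like values column rather than one column per field.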
Hi, I'm in the middle of testing deployment of the UF for a new setup and I started with 9.0.1, deploying it with Ansible from a local yum repository as the initial push (that's the gist of it; there is a bit more complex infrastructure behind it, but it's not really relevant). But now 9.1.1 came out, which was pointed out to me due to a security alert, so I updated the package on our repository, hit 'yum update' on one of my test servers, and this broke the UF. Apparently it needs to be started manually once with '--accept-license --answer-yes --no-prompt' to complete the upgrade and accept the license .. again .. ? Is there a clever way of dealing with this so it just works after upgrading the rpm, short of modifying the rpm's spec file so it does some starting and stopping while the rpm is being upgraded? Manually doing this whenever there happens to be an update is just not an option due to the number of hosts; our regular updates run unattended with basically just a 'yum/dnf update -y'. Modifying the systemd file so it just starts with the required parameters does not appear to be working with '_internal_launch_under_systemd'; replacing that with the old 'start etc' makes the UF not work with systemd anymore. RHEL9 is going to forego the init.d folder I think, so using the older, more flexible sysV scripts is not an option either. Any sort of manual intervention when there happens to be a new version is highly undesirable.
Hi Team, how can I fetch the start and end time from the logs below?

2023-08-30 00:29:00.018 [INFO ] [pool-3-thread-1] ReadControlFileImpl - Reading Control-File /absin/CARS.HIERCTR.D082923.T002302
2023-08-30 07:43:29.020 [INFO ] [Thread-18] FileEventCreator - Completed Settlement file processing, TRIM.UNB.D082923.T045920 records processed: 13283520

I want the start time and end time. Can someone help me with the query? My current query:

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."
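A hedged sketch, assuming the "Reading Control-File" event marks the start, the "Completed Settlement file processing" event marks the end, and only one such run falls inside the search window (otherwise you would need to group by some run identifier):

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" ("Reading Control-File /absin/CARS.HIERCTR." OR "Completed Settlement file processing")
| stats min(_time) AS start_time max(_time) AS end_time
| eval duration_sec=round(end_time-start_time, 3)
| fieldformat start_time=strftime(start_time, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat end_time=strftime(end_time, "%Y-%m-%d %H:%M:%S.%3N")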
I have a simple lookup file with two fields, user and host:

user    host
Bob     1
Dave    2
Karen   x
Sue     y

I want to exclude any results from my search where there is any combination of host AND user that matches any value from the lookup. For example, exclude any results where:
the user is Bob and the host is either 1, 2, x or y
the user is either Bob, Dave, Karen or Sue and the host is x

I'm playing with this search, which appears to work, but I'm unsure if there's a flaw in my logic or if there's a better way to do it:

index=proxy sourcetype="proxy logs" user="*" NOT ([| inputlookup lookup.csv | fields user | format ] AND [| inputlookup lookup.csv | fields host | format ]) | stats c by username, host

Thanks in advance
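A hedged sketch that keeps the same cross-product logic but spells it out; the only changes are consistent field names (the posted pipeline groups by username while the base search filters on user) and stats count instead of the c shorthand. It assumes the lookup's column names match the event field names user and host:

index=proxy sourcetype="proxy logs" user=*
    NOT ( [ | inputlookup lookup.csv | fields user | format ]
          [ | inputlookup lookup.csv | fields host | format ] )
| stats count by user, host

Two adjacent subsearches are ANDed implicitly, so the NOT (...) drops exactly the events whose user appears anywhere in the lookup's user column AND whose host appears anywhere in its host column, which matches the examples above.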
index=main sourcetype=_json status="True"
| stats count(status) as True by name
| append [| search index=main sourcetype=json status="False" | stats count(status) as False by name]
| append [| search index=main sourcetype=json status="*" | stats count(status) as Total by name]
| stats sum(True) as True sum(False) as False sum(Total) as Total max(Performance) as Performance by name
| eval Percentage=round(((True/Total)*100),0)
| fields Percentage

Is it possible to show a trendline and whether Percentage is up or down compared to last month?
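A hedged sketch of a month-over-month comparison that computes the percentage per name per month in one pass and flags the direction of change; index, sourcetype and field names are taken from the post, while the two-month window and the labels are assumptions:

index=main sourcetype=_json (status="True" OR status="False") earliest=-2mon@mon latest=@mon
| bin _time span=1mon
| stats count(eval(status="True")) AS True count AS Total by name, _time
| eval Percentage=round(True/Total*100, 0)
| sort 0 name, _time
| streamstats current=f last(Percentage) AS prev_Percentage by name
| eval trend=case(isnull(prev_Percentage), "n/a", Percentage>prev_Percentage, "up", Percentage<prev_Percentage, "down", true(), "flat")

For a visual trendline rather than a flag, a timechart of the percentage can be fed through the trendline command (for example | trendline sma3(Percentage)) to overlay a simple moving average.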
Hello Team, I have logs with the below pattern:

08/31/2023 8:00:00:476 am ........ count=0
08/31/2023 8:00:00:376 am ........ process started
08/31/2023 8:00:00:376 am ...... XXX Process

I need the process name and the count to be displayed together, but I don't have any common values/names/strings to match them on. I have 4 similar processes and their counts together in the logs. Is there a way to match them together? Any help is much appreciated.
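A hedged sketch that pairs them purely by time order, assuming each count line comes after the process-name line it belongs to (as in the sample, where the process lines are at .376 and the count at .476) and that the rex patterns below roughly fit the real log text; the index, patterns and field names are all placeholders to adjust:

index=my_index ("count=" OR "Process")
| rex "count=(?<proc_count>\d+)"
| rex "(?<process_name>\S+ Process)\s*$"
| sort 0 _time
| filldown process_name
| where isnotnull(proc_count)
| table _time process_name proc_count

filldown carries the most recent process_name forward onto the later count event; if the real order is reversed, sort descending (sort 0 - _time) before the filldown instead.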
Hi, I'm using a Splunk Enterprise instance based on a Docker image. The dashboard is getting all the default Windows events but isn't getting Sysmon events. I've created the inputs.conf file in the local directory; in that file I'm forwarding both "Microsoft-Windows-Windows Firewall With Advanced Security/Firewall" and "Microsoft-Windows-Windows-Sysmon/Operational" events. I see the Firewall events in the dashboard and see that as a source, but I don't get any of the Sysmon events and it doesn't show up as a source. I've confirmed that the events are in the Event Viewer on the client. I have installed the "Splunk Add-on for Sysmon" application, and on another separate Splunk Enterprise Docker image I tried installing the "Microsoft Sysmon Add-on" application. In the inputs.conf file I have tried (on different instances):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = false

or:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = main
renderXml = true

or:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = true

None have worked. I have installed the universal forwarder both manually and using the command line to rule out the quiet install, and I have even tried giving the forwarder service full admin rights to rule out issues accessing the logs, but I am still not getting any Sysmon events in the dashboard. What am I missing?
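Two hedged diagnostic searches that often narrow this down; <forwarder_host> is a placeholder for the client's host name. The first checks whether Sysmon data arrived under any index/sourcetype at all (the Sysmon add-ons typically file it under a different sourcetype than the default WinEventLog one), and the second looks for the forwarder's own errors around the Sysmon input:

| tstats count where index=* sourcetype=*sysmon* by index sourcetype host

index=_internal host=<forwarder_host> source=*splunkd.log* (Sysmon OR WinEventLog) (ERROR OR WARN)

If the second search shows the channel can't be opened, the forwarder's service account usually lacks read access to the Microsoft-Windows-Sysmon/Operational log; if nothing shows up at all, the inputs.conf stanza may not be in the app that is actually deployed to the forwarder.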
How can I see the daily license usage of a single index in Splunk?
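A hedged sketch of the usual approach, run on the license manager (my_index is a placeholder). It reads the internal license usage log, where the idx field holds the index name and b holds the bytes counted against the license:

index=_internal source=*license_usage.log* type=Usage idx=my_index
| timechart span=1d sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields - bytes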