Hi Splunk Community,

I am trying to create a props.conf for the sample log file below. My goals are to:

* Delete the Header tag so its data is not ingested.
* Break the individual events at lines starting with "library!WindowsService_98!..." or "processing!ReportServer_0-127!"
* Extract the timestamp, such as "!11/26/2023-00:21:18::"

Here is the props.conf that I have so far, but it is not working:

[sourcetype_name]
disabled = false
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 80
TIME_FORMAT = %m/%d/%Y-%H:%M:%S
LINE_BREAKER = ([\r\n]+)library!
SEDCMD-null = (<Header>([\s\S]*?)<\/Header>)

------------------- sample log file -------------------------
<Header>
<Product>Microsoft SQL Server Reporting Services Version 2007.0100.6000.029 ((Random_value).18802-2848 )</Product>
<Locale>English (United States)</Locale>
<TimeZone>Central Daylight Time</TimeZone>
<Path>C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\Logfiles\ReportServerService__11_26_2023_00_00_01.log</Path>
<SystemName>hostName01</SystemName>
<OSName>Microsoft Windows NT 6.2.9200</OSName>
<OSVersion>6.2.9200</OSVersion>
<ProcessID>3088</ProcessID>
</Header>library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Cleaned 0 batch records, 0 policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running jobs, 0 persisted streams, 0 segments, 0 segment mappings.
library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch() ends
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Cleaned 0 batch records, 0 policies, 1 sessions, 0 cache entries, 1 snapshots, 14 chunks, 0 running jobs, 0 persisted streams, 9 segments, 9 segment mappings.
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Call to CleanBatch() ends
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Cleaned 0 batch records, 0 policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running jobs, 0 persisted streams, 0 segments, 0 segment mappings.
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Call to CleanBatch() ends
library!ReportServer_0-127!2558!11/26/2023-00:21:18:: i INFO: RenderForNewSession('/Hampton.Common.Reports/BOL')
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 19., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 19.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 54., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 54.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 61., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 61.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 62., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 62.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2.
library!WindowsService_98!1234!11/26/2023-00:30:01:: i INFO: Call to CleanBatch()
------------------- sample log file end -------------------------
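Below is a sketch of a props.conf that may get closer to those goals. It is untested against the real feed, and the stanza name is the placeholder from the question; adjust the regexes to the actual prefixes:

```ini
[sourcetype_name]
SHOULD_LINEMERGE = false
# Break before lines that start with either prefix. Only capture group 1
# (the newlines) is discarded, so "library!"/"processing!" stays in the event.
LINE_BREAKER = ([\r\n]+)(?:library|processing)!
# The timestamp sits after the third "!" of each event.
TIME_PREFIX = ^[^!]+![^!]+![^!]+!
TIME_FORMAT = %m/%d/%Y-%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# SEDCMD takes a sed-style substitution (s/regex/replacement/flags),
# not a bare regex; this strips the <Header>...</Header> block.
SEDCMD-strip_header = s/<Header>[\s\S]*?<\/Header>//g
```

Note that the header block has no line break before the first "library!" event, so it stays attached to that first event and is then removed by the SEDCMD at index time, rather than being ingested as a separate event.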
In the last month, the Splunk Threat Research Team has had 2 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.17.0 and v4.18.0). With these releases, there are 51 new analytics, 5 new analytic stories, 18 updated analytics, and 4 updated analytic stories now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include:

* The "Office 365 Persistence Mechanisms" analytic story includes a group of detections that delve into attackers' tactics and techniques to maintain prolonged unauthorized access within the O365 environment. Persistence in this context refers to adversaries' methods to keep their foothold after an initial compromise.
* The "Windows Attack Surface Reduction" analytic story includes a group of detections for Attack Surface Reduction (ASR) events. ASR is a feature of Windows Defender Exploit Guard that prevents actions and apps that are typically used by exploit-seeking malware to infect machines. When an action is blocked by an ASR rule, an event is generated.
* The "Kubernetes Security" analytic story encompasses a range of detections that highlight the escalating challenges when securing containerized environments. Key detections include Kubernetes Abuse of Secret by Unusual Location, User Agent, User Group, and Username, which pinpoints attempts to exploit secrets via anomalous parameters.
* Four new analytics delve into the intricacies of MFA security in the PingID environment. These detections, contributed by @nterl0k, cover scenarios like Mismatch Auth Source and Verification Response, Multiple Failed MFA Requests, New MFA Method Post-Credential Reset, and Registration of New MFA Methods, highlighting the evolving landscape of digital authentication security.
New Analytics (51)
* O365 Service Principal New Client Credentials
* O365 Mailbox Read Access Granted to Application
* O365 Tenant Wide Admin Consent Granted
* O365 Application Registration Owner Added
* O365 Mailbox Inbox Folder Shared with All Users
* O365 Advanced Audit Disabled
* O365 High Number Of Failed Authentications for User
* O365 Multiple Users Failing To Authenticate From Ip
* O365 User Consent Blocked for Risky Application
* O365 User Consent Denied for OAuth Application
* O365 Mail Permissioned Application Consent Granted by User
* O365 ApplicationImpersonation Role Assigned
* O365 File Permissioned Application Consent Granted by User
* O365 Multiple Failed MFA Requests For User
* O365 High Privilege Role Granted
* O365 New MFA Method Registered
* O365 Multiple AppIDs and UserAgents Authentication Spike
* O365 Block User Consent For Risky Apps Disabled
* O365 Multi-Source Failed Authentications Spike
* Powershell Remote Services Add TrustedHost
* Windows Modify Registry AuthenticationLevelOverride
* Windows Modify Registry DisableRemoteDesktopAntiAlias
* Windows Modify Registry DisableSecuritySettings
* Windows Modify Registry DontShowUI
* Windows Modify Registry ProxyEnable
* Windows Modify Registry ProxyServer
* Windows Archive Collected Data via Rar
* Windows Indicator Removal Via Rmdir
* Windows Credentials from Password Stores Creation
* Windows Credentials from Password Stores Deletion
* Windows Defender ASR Rules Stacking
* Windows Defender ASR Rule Disabled
* Windows Defender ASR Registry Modification
* Windows Defender ASR Block Events
* Windows Defender ASR Audit Events
* Windows Masquerading Msdtc Process
* Windows Parent PID Spoofing with Explorer
* Web Remote ShellServlet Access
* Splunk RCE via User XSLT
* PingID Mismatch Auth Source and Verification Response (External Contributor: @nterl0k)
* PingID Multiple Failed MFA Requests For User (External Contributor: @nterl0k)
* PingID New MFA Method After Credential Reset (External Contributor: @nterl0k)
* PingID New MFA Method Registered For User (External Contributor: @nterl0k)
* Kubernetes Abuse of Secret by Unusual Location
* Kubernetes Abuse of Secret by Unusual User Agent
* Kubernetes Abuse of Secret by Unusual User Group
* Kubernetes Abuse of Secret by Unusual User Name
* Kubernetes Access Scanning
* Kubernetes Suspicious Image Pulling
* Kubernetes Unauthorized Access
* Windows Modify System Firewall with Notable Process Path

New Analytic Stories (5)
* Office 365 Account Takeover
* Office 365 Persistence Mechanisms
* Windows Attack Surface Reduction
* Rhysida Ransomware
* Kubernetes Security

Updated Analytics (18)
* High Number of Login Failures from a single source
* O365 Add App Role Assignment Grant User
* O365 Added Service Principal
* O365 Bypass MFA via Trusted IP
* O365 Disable MFA
* O365 Excessive Authentication Failures Alert
* O365 Excessive SSO logon errors
* O365 New Federated Domain Added
* O365 PST export alert
* O365 Suspicious Admin Email Forwarding
* O365 Suspicious Rights Delegation
* O365 Suspicious User Email Forwarding
* Splunk App for Lookup File Editing RCE via User XSLT
* Allow File And Printing Sharing In Firewall
* Azure AD PIM Role Assigned
* CMD Carry Out String Command Parameter
* Detect Use of cmd exe to Launch Script Interpreters
* Modification Of Wallpaper

Updated Analytic Stories (4)
* DarkGate Malware
* NjRAT
* RedLine Stealer
* Amadey

The team has also published the following blogs:
* Deploy, Test, Monitor: Mastering Microsoft Defender ASR with Atomic Techniques in Splunk
* Unmasking the Enigma: A Historical Dive into the World of PlugX Malware
* Splunk Security Content for Threat Detection & Response: Q3 Roundup
* Previous Security Content Roundups from the Splunk Threat Research Team

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Announcement and Additional Resources here!

Hey there, Community! We're excited to share Cisco AppDynamics Smart Agent, newly released in v23.11.0. It simplifies agent operations, enabling installation, upgrade, and rollback right from the Controller UI. Plus, you can view Smart Agent inventory details along with other installed agents. There's also a CLI option for advanced configuration. Getting started with Smart Agent is simple; just install it on each desired host.

Find Smart Agent Info

We've put together a Smart Agent resource collection here in Community. For an overview, we recommend starting with Smart Agent: agent lifecycle management reimagined. You'll also find the first pair of several real-life example how-to articles:

* Exploring an APM agent upgrade scenario with Smart Agent
* Exploring an APM agent installation scenario with Smart Agent

We've also put together a collection of Frequently Asked Questions for Smart Agent here. Don't miss @Aaron.Schifman's clickable demo: Smart Agent: How easy is it? To learn more, see the complete Smart Agent documentation.

Tell us what you think! Our whole team is eager to see how you use Smart Agent. Join the conversation here. We're open for questions!

Best,
Claudia Landivar, Community Manager & Editor
Ryan Paredez, Community Manager
I was doing regular health checks in my Splunk deployment and found that indexing health is critical, mainly due to a small bucket count. maxbucketSize has been set to auto; I'm not sure what else might be the cause. I'm new to the org and have little to no idea about the underlying architecture and implementation.
How do I add sample data to a cloud based trial instance?
Hello, I unfortunately lost my web UI password for the admin account. What is the way to reset it? I know that on Splunk Enterprise there is user-seed.conf, the REST API, and even a CLI command, but I can't find any clue about Splunk UBA. Thanks for the help.
Where can I download Splunk Universal Forwarder 9.0.7?
The problem is that there is a lag in the log shipping from our application to Splunk. After some investigation, we realized that we can override the event time by providing a _time property in the logs (ref: https://docs.splunk.com/Documentation/SCS/current/Search/Timestampsandtimeranges), and that it should be UNIX epoch time (seconds). We did that, but it had no effect on the event time, and the time difference persists. We have been testing many possibilities for a while now, yet none of them did the trick.
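One thing worth checking, as an assumption about the ingestion path (it isn't described in the question): when events are sent through the HTTP Event Collector, the timestamp is set by a top-level "time" key in the request envelope, in epoch seconds, not by a "_time" property inside the event body. A minimal payload sketch (the host and sourcetype values are made up):

```json
{
  "time": 1700957678,
  "host": "app01",
  "sourcetype": "my_app_logs",
  "event": {
    "level": "INFO",
    "message": "order processed"
  }
}
```

If the events instead arrive as raw text, timestamp recognition is governed by TIME_PREFIX/TIME_FORMAT in props.conf on the indexer, and a _time field inside the payload has no effect.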
Hello All, currently we have set up a use case to send emails whenever a condition is satisfied and an alert fires. My concern is that received emails show the FROM address as "abc.xyz+untrusted@jkl.com", and we think some mailboxes are not getting these emails because of the untrusted email address; please correct me if I have misunderstood. Also, is there a way to add "abc.xyz@jkl.com" to a trusted email group or something like that? Or is there a different way to get the actual email address, instead of the +untrusted one, whenever an email is sent out from Splunk? Hope this makes sense. Thanks,
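If the "+untrusted" suffix is being added by Splunk's email settings rather than by the mail relay, the sender address can usually be overridden, either globally (Settings > Server settings > Email settings) or per alert. A per-alert sketch, with the alert name and address as placeholders:

```ini
# savedsearches.conf
[My Alert Name]
action.email = 1
action.email.from = abc.xyz@jkl.com
```

If the suffix is instead appended downstream by the mail infrastructure, this would need to be fixed on the relay side, or the +untrusted address added to the recipients' allow list.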
Greetings, brave questers!

As we continue our exciting journey with the Great Resilience Quest, it is time to announce our latest leaderboard standings and celebrate the 2nd round winners of the Adventurer's Bounty and Champion's Tribute!

11th Leaderboard Update

Shout out to these questers for their amazing progress in the past two weeks. Please keep up the momentum! There are 3 more chances to make it onto the leaderboard from now on.

2nd Round Winners - Adventurer's Bounty

2nd Round Winners - Champion's Tribute

Congratulations to our 2nd round winners! Your dedication and skills have certainly paid off. Remember, each player can only win a specific type of prize once, to ensure fair play and equal chances of victory for all. For the winners, we will reach out to you via email to request that you sign a winner affidavit. Once we receive your signature, we will send out your prizes. Apologies for the technical issue we encountered, which caused some delays in distributing the prizes to our first round winners.

Finally, don't forget: there is one more round of winner announcements coming at the end of January, as the quest concludes. Stay engaged, keep competing, and fight for the prizes you deserve! We are amazed by the enthusiasm and resilience shown by our participants. Keep up the great work!

As the holiday season approaches, I would also like to send my best wishes to you and your family. Happy Holidays!

Best regards,
Customer Success Marketing
I'd like to set up an email notification for the following dashboard, specifically on Saturdays and Sundays at intervals of 3 hours. Since I receive files only on these days, this schedule aligns with our data delivery. Could someone guide me on configuring this setup?    
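One way to do this (a sketch; the report name and recipient are placeholders) is to back the dashboard with a scheduled report or scheduled PDF delivery whose cron schedule fires every 3 hours on Saturday and Sunday. In cron syntax, day-of-week 0 is Sunday and 6 is Saturday:

```ini
# savedsearches.conf
[Weekend Dashboard Email]
cron_schedule = 0 */3 * * 0,6
action.email = 1
action.email.to = team@example.com
action.email.sendpdf = 1
```

The same cron expression can be entered in the UI via the dashboard's "Schedule PDF Delivery" (or a scheduled report's schedule) using the custom cron option.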
index="********" message_type=ERROR correlation_id="*"
| eval err_field1 = spath(_raw,"response_details.body")
| eval err_field2 = spath(_raw,"response_details")
| eval err_field3 = spath(_raw,"error")
| eval err_field4 = spath(_raw,"message")
| eval err_final = coalesce(err_field1,err_field2,err_field3,err_field4)
| table err_field1 err_field2 err_field3 err_field4 err_final

I have the fields populating for err_field3 and err_field4, but err_final is not populating. Attached the screenshot for reference.
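coalesce() returns the first argument that is not null, and an empty string counts as non-null. If spath() is returning empty strings for the response_details paths (a guess, since the raw events aren't shown), err_final gets populated with an empty string instead of falling through to err_field3/err_field4. Converting empty strings to null first may fix it:

```
index="********" message_type=ERROR correlation_id="*"
| eval err_field1 = nullif(spath(_raw,"response_details.body"), "")
| eval err_field2 = nullif(spath(_raw,"response_details"), "")
| eval err_field3 = nullif(spath(_raw,"error"), "")
| eval err_field4 = nullif(spath(_raw,"message"), "")
| eval err_final = coalesce(err_field1, err_field2, err_field3, err_field4)
| table err_field1 err_field2 err_field3 err_field4 err_final
```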
Hello, can anyone help me with a custom visualization like the image below? Any add-on or pointer would be appreciated. Thanks in advance.
Hi, my dashboard takes around 1.3 minutes to load the data for multiple panels, and sometimes around 4 minutes. My client has come up with a requirement to enable an "auto refresh" feature for the dashboard at 15-minute intervals. I use a base search, and the base search in turn uses | tstats. I am not familiar with saved searches, scheduled searches, or loadjob. Could you please advise how to implement this feature? Thanks, Selvam.
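One common pattern for this (a sketch; all names are placeholders): move the expensive tstats base search into a scheduled report that runs every 15 minutes, then have the dashboard panels read its cached results with loadjob, which returns almost instantly, and set the dashboard to auto-refresh on the same interval.

```ini
# savedsearches.conf -- the scheduled report
[My Base Report]
search = | tstats count where index=my_index by host _time span=5m
cron_schedule = */15 * * * *
enableSched = 1
```

Each panel then uses something like:

```
| loadjob savedsearch="owner:app_name:My Base Report"
```

where owner and app_name match where the report is saved.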
My OS is CentOS 7, with AppDynamics 21.4.4 installed. I plan to migrate to Ubuntu 22.04 LTS and upgrade to AppDynamics 23.9. Can I use HA to migrate, or should I use backup & restore to do the transfer? I have tried to migrate in different ways several times, but still cannot do it.
Hello,

I am creating a dashboard in Dashboard Studio and want just 3 time ranges available to the user:

* Last Month ("-mon@mon" to "@mon")
* Last to last Month ("-2mon@mon" to "-mon@mon")
* Month to date ("@mon" to "@d")

I think I can get all these options in the default time range input as well, but I do not want the user to select anything other than these 3 options, and that is something I cannot prevent with the default time range input (or at least I am not sure how, unless I create a separate user role with specific time ranges allowed, maybe).

So as a workaround, I have created a dropdown with these 3 token labels, whose values are set in the form of relative_time functions:

where tempDt>=relative_time(now(),"-mon@mon") and tempDt<relative_time(now(),"@mon")
where tempDt>=relative_time(now(),"@mon") and tempDt<relative_time(now(),"@d")
where tempDt>=relative_time(now(),"-2mon@mon") and tempDt<relative_time(now(),"-mon@mon")

In the main search (not included in the demo dashboard definition shared here) I use a field called tempDt, and the token value then filters data based on it:

index=abc earliest="-2mon@mon" ```there exist a field called tempDt``` $timerange$

All this worked as expected; no issues.

------------------------------------------------------------------------------------------------------------------------

Now, I also want to display the month name in the dashboard header (created using markdown text) based on the token value:

* if Last Month, then strftime(relative_time(now(),"-mon@mon"),"%b-%Y")
* if Last to last month, then strftime(relative_time(now(),"-2mon@mon"),"%b-%Y")
* if Month To Date, then strftime(relative_time(now(),"@mon"),"%b-%Y")

Please see the dashboard source code below that I have tried. The markdown text is not populated with the month name, yet when I run the same search outside the dashboard, it works OK.
{
  "visualizations": {
    "viz_q7o2tu52": {
      "type": "splunk.markdown",
      "options": {
        "markdown": "### **Monthly Service Review ($MD Search:result.month$)**"
      }
    }
  },
  "dataSources": {
    "ds_zBQAeHol": {
      "type": "ds.search",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults \n| eval temp=case(LIKE($timerange|s$,\"%-2mon@mon%\"),\"-2mon@mon\",LIKE($timerange|s$,\"%-mon@mon%\"),\"-mon@mon\",LIKE($timerange|s$,\"%@d%\"),\"@mon\",true(),\"@d\")\n| eval epoch=relative_time(now(),$temp$)\n| eval month=strftime(epoch,\"%b-%Y\")\n| table month",
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        }
      },
      "name": "MD Search"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "options": {
        "items": [
          {
            "label": "Last Month",
            "value": "where tempDt>=relative_time(now(),\"-mon@mon\") and tempDt<relative_time(now(),\"@mon\")"
          },
          {
            "label": "Month to Date",
            "value": "where tempDt>=relative_time(now(),\"@mon\") and tempDt<relative_time(now(),\"@d\")"
          },
          {
            "label": "Last to last Month",
            "value": "where tempDt>=relative_time(now(),\"-2mon@mon\") and tempDt<relative_time(now(),\"-mon@mon\")"
          }
        ],
        "defaultValue": "where tempDt>=relative_time(now(),\"-mon@mon\") and tempDt<relative_time(now(),\"@mon\")",
        "token": "timerange"
      },
      "title": "Time Range",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "input_global_trp",
        "type": "input",
        "position": { "x": 640, "y": 130, "w": 198, "h": 82 }
      },
      {
        "item": "viz_q7o2tu52",
        "type": "block",
        "position": { "x": 540, "y": 30, "w": 400, "h": 90 }
      }
    ],
    "globalInputs": []
  },
  "description": "",
  "title": "MD Markdown Token Test"
}

Can you please help me achieve this? Thank you.

Regards,
Madhav
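One likely reason the markdown token stays empty (an assumption from reading the search, not verified on a live instance): inside the makeresults query, $temp$ is parsed as a dashboard token, not as the temp field created by the preceding eval, and that token is never set, so the search fails. Referencing the field directly may fix it:

```
| makeresults
| eval temp=case(LIKE($timerange|s$,"%-2mon@mon%"),"-2mon@mon",LIKE($timerange|s$,"%-mon@mon%"),"-mon@mon",LIKE($timerange|s$,"%@d%"),"@mon",true(),"@d")
| eval epoch=relative_time(now(), temp)
| eval month=strftime(epoch,"%b-%Y")
| table month
```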
This was my initial search. I cannot compare the two fields srcdomain and destdomain, because when I try to use eval, my value comes out as null. Thanks everyone.
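A common gotcha here (a guess, since the search itself isn't shown): in eval, an unquoted name on the right-hand side is a field reference, a double-quoted value is a string literal, and a comparison involving a field that is missing from an event evaluates to null. A sketch that fills missing values first and then compares the two fields:

```
...
| fillnull value="-" srcdomain destdomain
| eval domain_match = if(srcdomain == destdomain, "match", "no match")
```

Without the fillnull, events lacking either field would leave domain_match null, which matches the symptom described.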
I installed a UF on Linux and configured inputs.conf as follows:

[monitor:///xxxx/]
whitelist = xxxx_list_<yyyymmdd>.csv

I restarted the UF and confirmed with splunk list monitor that the target file was displayed. However, the next day, when I checked the monitoring status with the same command, the newly created file (xxxx_list_20241212.csv) was not displayed. Does anyone know how to resolve this? Any guidance would be appreciated.
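One thing to check: whitelist in inputs.conf is a regular expression matched against the full path, so a literal <yyyymmdd> placeholder only matches a file literally named that way, not the dated files created each day. A sketch that matches any 8-digit date (assuming the date portion is always 8 digits):

```ini
[monitor:///xxxx/]
whitelist = xxxx_list_\d{8}\.csv$
```

With a pattern like this, newly created files such as xxxx_list_20241212.csv should be picked up when the monitor rescans the directory.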
I would like to be able to see the daily traffic flow rate of Splunk Enterprise on my dashboard. Ideally, I would like to be able to see the traffic flow per forwarder, but at the very least I would like to see the overall traffic flow. Is this possible?
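This should be possible from the indexers' own _internal metrics. A sketch that charts daily incoming volume per forwarder, using the tcpin_connections group of metrics.log (field names as recorded there; access to index=_internal is required):

```
index=_internal source=*metrics.log* group=tcpin_connections
| eval GB = kb/1024/1024
| timechart span=1d sum(GB) AS daily_GB by hostname
```

Dropping the "by hostname" clause gives the overall daily total instead of the per-forwarder breakdown.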
Hello All, I have a search question. I have a CSV file that returns data. If the ID field has no data, I want a table that shows 4 columns: NAME, STATUS, DATE, ACTION (these come from the CSV header line). If ID > 0, I want to show these columns: DATE-Changed, ID, NAME, DATE_DOWN, ACTION. I have not yet seen how I might do this. What I need, in a sense, is two searches: one when ID=0 and one when ID>0. Any suggestions?

Thanks, EWHOLZ
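Since the column set depends on a value inside the data, one sketch (assuming the CSV is available as a lookup; file and field names are placeholders) is to run it as two searches, for example in two dashboard panels:

```
| inputlookup mydata.csv
| where isnull(ID) OR ID = 0
| table NAME STATUS DATE ACTION
```

```
| inputlookup mydata.csv
| where ID > 0
| table "DATE-Changed" ID NAME DATE_DOWN ACTION
```

A single table cannot easily switch its column set per row, so splitting on the ID condition is the straightforward approach.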