All Topics

NOVEMBER 2023

OpenTelemetry Insights Page Now Provides Visibility to Your GCP and Azure Hosts
Splunk Observability customers with GCP and Azure integrations now have access to the OpenTelemetry insights view within the UI. The OpenTelemetry insights page, accessed via the Data Management module, gives you a complete view of your host inventory, including a full list of instances, the deployment status of the OpenTelemetry Collector for each instance, and the Collector version. This view is already available for AWS EC2 hosts.

Discover Our New and Improved Time Picker!
250,000 - that's the number of times the Time Picker component within Observability Cloud is clicked each month. A critical feature in the investigation journey, the Time Picker allows engineers to easily and quickly determine the time frame of their analyses. For this reason, we're launching a new version that not only fixes bugs but also includes new functionality such as type-ahead behavior and additional timestamp formats that enhance the developer experience and accelerate workflows. Find out more here!

Integrating REST Endpoints with Splunk On-Call
Available now in the Observability Use Case Explorer! A brand new use case for Splunk On-Call, focused on sending customized alerts and incident details from your proprietary and open-source monitoring tools into the Splunk On-Call timeline. Read all about it here!

NOW AVAILABLE: Unified Identity Enhancements
Get more control over which Splunk Cloud users can access Observability Cloud! We're introducing a new custom "o11y_access" role that lets admins restrict who can become an Observability Cloud user and use the Unified Identity/SSO capability. Check out our updated docs for more details.

ICYMI: Check Out the Latest Observability Blog Posts
Announcing the General Availability of Splunk RUM Session Replay
Why Does Observability Need OTel?

The Great Resilience Quest Continues at Full Momentum
The Great Resilience Quest continues to welcome challengers until the end of January 2024. This gamified adventure teaches you how to implement key Splunk use cases on the path to digital resilience. Conquer each level by completing bite-sized learning activities and quizzes. With amazing prizes still up for grabs, every moment counts. Join the quest today!

Platform Updates

Build Digital Resilience Through Expanded Access to Decentralized Data
In his recent blog, Tom Casey, SVP Products & Technology at Splunk, discusses several recent Splunk Platform innovations that help customers build digital resilience through expanded access to decentralized data: a better understanding of customer-facing issues, regardless of whether the data sits in Splunk or in cost-effective Amazon S3 storage, and easier compliance with data sovereignty requirements.

Explore the New Log Analytics for IT Troubleshooting Splunk Use Case Page
Splunk Observability Log Analytics for IT Troubleshooting gives customers comprehensive visibility, at scale, with the Splunk Platform. Accelerate innovation and IT troubleshooting in complex hybrid environments. Explore the use case here.

Splunk Observability – The Latest Innovations to Perfect UX
Splunk helps you prioritize the right issues and make faster and better decisions through proactive and smarter alerting, richer data, and simpler workflows. Join this webinar for a first look at new features that can help you quickly resolve customer-facing issues to deliver great user experiences (UX). Featuring product demonstrations of Session Replay, Edge Processor, OpenTelemetry, and Federated Search for Amazon S3.

Splunk App for Data Science and Deep Learning - What's New in Version 5.1.1
In the ever-evolving world of data science, keeping your tools and software up to date is essential. It ensures that you have access to the latest features, security updates, and bug fixes. The team behind our data science app has been hard at work to bring you the most robust and secure version yet. Explore our recent blog to dive into what's new in the recently released Splunk App for Data Science and Deep Learning (DSDL) version 5.1.1, available on Splunkbase.

Machine Learning in General, Trade Settlement in Particular
The recent T+1 compliance directive, which mandates that all US trades be settled in at most one day starting in May 2024, is the driving force behind making the trade settlement process more resilient. Explore this hands-on blog on using the Splunk Machine Learning Toolkit to predict whether a trade settlement in the financial services industry will fail to complete.

Tech Talks, Office Hours and Lantern

Tech Talks
OpenTelemetry: What's Next. Logs, Profiles, and More - Register now and join us on Tuesday, November 14, 2023. You'll learn about OpenTelemetry's new logging functionality, including its two logging paths, the benefits of each, real-world production examples and much more!
ICYMI: Starting With Observability: OpenTelemetry Best Practices. Watch the replay.

Community Office Hours
Join our upcoming Community Office Hour sessions, where you can ask questions and get guidance.
Security: SOAR - Wed, Nov 29 (Register here)
Splunk Search - Wed, Dec 13 (Register here)

Splunk Lantern
In this month's blog we're highlighting everything that's new on Lantern this month, with new data articles for MS Teams as well as brand new use cases, product tips and data descriptors. Read on to see what's new.

Education Corner

A Steady Drumbeat of New and Updated Splunk Training
Can you hear it? That's the sound of new Splunk Education courses dropping regularly! You can always search the Splunk Training and Enablement Platform (STEP) for courses that align with your observability learning journey, or check out our October release announcements. And don't forget to check in with your Org Manager if you're looking to enroll in paid training using your company's Training Units. Get curious about what's possible with Splunk.

Hola! Say Hello to Our Translated Content
It's a big world out there, with 8 billion people and about 7,000 languages spoken. Splunk Education is determined to reach as many of these people as possible by publishing training and certification in more languages. We are pleased to share that we now offer free, self-paced eLearning courses with Spanish captions. Watch for more translated content and captions coming soon. Mucho gusto!

Talk with Us About Splunk!
The Splunk product design team wants to learn about how you use our products. If you're interested in contributing, please fill out this quick questionnaire so we can reach out to you. Your participation may take the form of a survey, an email to schedule an interview session, or some other type of research invitation. We look forward to hearing from you!

Until Next Time, Happy Splunking
NOVEMBER 2023

GovSummit 2023
Registration is open for Splunk's largest free annual event for government and agency leaders and decision-makers. We're looking forward to bringing together government IT and security professionals for an industry-leading event. Hear firsthand from government and industry experts about how agencies are adapting and thriving, and building digital resilience to improve their cyber strategy.

Event Details:
Ronald Reagan Building and International Trade Center, Washington, D.C.
Wednesday, December 14, 2023 | 7:30 am - 4:30 pm EST

Workshops:
Splunk offers free virtual workshops. Join the top technical experts at Splunk for hands-on learning on Splunk SOAR, Insider Threat, and Enterprise Security. Times vary and there is no cost to attend.
11/16 Security Lunch & Learn, 1:00 - 4:00 ET, Registration Page
11/30 Splunk 4 Rookies, 1:00 - 4:00 ET, Registration Page
12/7 IT Foundations, 1:00 - 2:30 ET, Registration Page

Booth and Table Top Events:
Be sure to stop by and see Splunk to learn about the latest trends at these industry events:
11/15 NLC City Summit 2023, Atlanta, GA
11/16 CyberTalks, Washington, DC
11/27 AWS re:Invent, Las Vegas, NV
12/3-6 Center for Technology CIO Roundtable, Sea Island, GA
12/7 Pennsylvania Digital Government Summit, Harrisburg, PA
12/13 CMS Industry Days, Baltimore, MD
12/14 GovSummit, Washington, DC
12/15 DoDIIS, Portland, OR

New Research Report for Today's Security Leaders
The CISO Report is officially here! Check out the results of our original research, including emerging trends, threats and strategies that offer insight for today's security leaders.

Platform Updates

Build Digital Resilience Through Expanded Access to Decentralized Data
In his recent blog, Tom Casey, SVP Products & Technology at Splunk, discusses several recent Splunk Platform innovations that help customers build digital resilience through expanded access to decentralized data: a better understanding of customer-facing issues, regardless of whether the data sits in Splunk or in cost-effective Amazon S3 storage, and easier compliance with data sovereignty requirements.

Model Assisted Threat Hunting Powered by PEAK & Splunk AI
Accelerate threat hunting with Splunk AI as a catalyst. Join us to learn how to leverage the PEAK threat hunting framework and Splunk AI to find malware dictionary-DGA domains. Learn the basics of the PEAK threat hunting framework developed by Splunk's SURGe security research team, understand the power Splunk AI can bring to your threat hunts, and see how to create automated detections from your successful hunts.

Explore the New Log Analytics for IT Troubleshooting Splunk Use Case Page
Splunk Observability Log Analytics for IT Troubleshooting gives customers comprehensive visibility, at scale, with the Splunk Platform. Accelerate innovation and IT troubleshooting in complex hybrid environments. Explore the use case here.

Splunk Observability – The Latest Innovations to Perfect UX
Splunk helps you prioritize the right issues and make faster and better decisions through proactive and smarter alerting, richer data, and simpler workflows. Join this webinar for a first look at new features that can help you quickly resolve customer-facing issues to deliver great user experiences (UX). Featuring product demonstrations of Session Replay, Edge Processor, OpenTelemetry, and Federated Search for Amazon S3.

Splunk App for Data Science and Deep Learning - What's New in Version 5.1.1
In the ever-evolving world of data science, keeping your tools and software up to date is essential. It ensures that you have access to the latest features, security updates, and bug fixes. The team behind our data science app has been hard at work to bring you the most robust and secure version yet. Explore our recent blog to dive into what's new in the recently released Splunk App for Data Science and Deep Learning (DSDL) version 5.1.1, available on Splunkbase.

Tech Talks, Office Hours and Lantern

Tech Talks
OpenTelemetry: What's Next. Logs, Profiles, and More - Register now and join us on Tuesday, November 14, 2023. You'll learn about OpenTelemetry's new logging functionality, including its two logging paths, the benefits of each, real-world production examples and much more!
Advance Your App Development with the Visual Studio Code Extension - Register now and join us on Wednesday, November 15, 2023. See the latest on the Visual Studio Code Extension for Splunk SOAR and how you can make developing apps a breeze.
Streaming Lookups with Splunk Edge Processor - Register now and join us on Thursday, November 16, 2023 to learn how best to leverage lookups to optimize costs and maintain data fidelity, explore use cases for this capability that drive business outcomes, and review other ways to optimize your data management strategy using Edge Processor.
ICYMI: What's New in Splunk SOAR 6.2? Watch the replay. Starting With Observability: OpenTelemetry Best Practices. Watch the replay.

Community Office Hours
Join our upcoming Community Office Hour sessions, where you can ask questions and get guidance.
Security: SOAR - Wed, Nov 29 (Register here)
Splunk Search - Wed, Dec 13 (Register here)

Splunk Lantern
In this month's blog we're highlighting some great new updates to our Getting Started Guide for Enterprise Security (ES) that give you easy ways to get going on this powerful platform, as well as new data articles for MS Teams. As usual, we're also sharing the rest of the new articles we've published this month. Read on to see what's new.

Education Corner

Creating Inclusive Learning Spaces at Splunk
October was National Disability Employment Awareness Month (NDEAM). At Splunk, we are grateful that our community is made up of all types of people, with all types of experiences and points of view. This diversity creates interest, sparks innovation, and fosters growth. Find out how Splunk supports NDEAM and weaves this awareness into its Splunk Education programs in our latest blog.

Hola! Say Hello to Our Translated Content
It's a big world out there, with 8 billion people and about 7,000 languages spoken. Splunk Education is determined to reach as many of these people as possible by publishing training and certification in more languages. We are pleased to share that we now offer free, self-paced eLearning courses with Spanish captions. Watch for more translated content and captions coming soon. Mucho gusto!

Talk with Us About Splunk!
The Splunk product design team wants to learn about how you use our products. If you're interested in contributing, please fill out this quick questionnaire so we can reach out to you. Your participation may take the form of a survey, an email to schedule an interview session, or some other type of research invitation. We look forward to hearing from you!

Want to learn more about resources from Splunk? View our infographic to learn, get help, and play with Splunk.

Until Next Time, Happy Splunking
I have created an app for a team that I work with, and have set up mapping from our SAML auth so that the people on the team get a role that has access to the app. I would like these folks, when they log in (they only have this one role, no other roles -- not even the default user role), to land on the home page for the app.

As I understand it, that's supposed to be accomplished with the default_namespace parameter, set in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf. Now, in a regular browser window, when they log in, they get a 404 page for the app's home page (en-US/app/<appname>/search). If they do it in an incognito/private browsing window, they land on the Launcher app and can then navigate to the app, and it works just fine. The app's home page exists and is absolutely NOT a 404; after logging in via incognito, the URL they reach when they manually navigate to the app is identical to the link they land on when logging in without incognito. (Ideally, I don't even want these users to have access to the Launcher app, but for now they have to, in order to work around this.)

We have a distributed environment (multiple indexers, multiple load-balanced search heads with a VIP). This is the first time I've worked in a distributed environment, so I'm assuming it's something to do with that. Any tips on what I'm doing wrong?
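For reference, a minimal sketch of the role-level stanza this question revolves around, assuming the SAML-mapped role is called team_role (a placeholder) and the app directory is <appname> as above. The same file would need to exist on every search head behind the VIP (or be pushed from the deployer if they are clustered):

# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
# role name and app name are placeholders for this setup
[role_team_role]
default_namespace = <appname>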
Hi, when using the jdk8+ javaagent version 22.12.0, I see the error below:

$ java -javaagent:/cache/javaagent.jar -version
Unable to locate appagent version to use - Java agent disabled
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-linux64) (build 25.382-b05, mixed mode)

What is the compatible javaagent version for the above Java version?
Example logs:

2022-08-19 08:10:53.0593|**Starting**
2022-08-19 08:10:53.5905|fff
2022-08-19 08:10:53.6061|dd
2022-08-19 08:10:53.6218|Shutting down
2022-08-19 08:10:53.6218|**Starting**
2022-08-19 08:10:53.6374|fffff
2022-08-19 08:10:53.6686|ddd
2022-08-19 08:10:53.6843|**Starting**
2022-08-19 08:10:54.1530|aa
2022-08-19 08:10:54.1530|vv

From this I have created three columns: Devicenumber, _time, Description. If a "**Starting**" message is followed by "Shutting down", the run should be classified as good; if a "**Starting**" message is not followed by "Shutting down", it should be classified as bad. From the above example, there should be two bad and one good. If there is only a single row containing "**Starting**" and no "Shutting down" is recorded, that case should also be classified as bad.
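A minimal sketch of one way to group these, appended to whatever base search produces the events, assuming Devicenumber, _time and Description are already extracted and the marker strings match exactly as in the sample:

| sort 0 _time
| streamstats count(eval(if(Description=="**Starting**", 1, null()))) as run_id by Devicenumber
| stats values(Description) as msgs by Devicenumber run_id
| eval status=if(isnotnull(mvfind(msgs, "Shutting down")), "good", "bad")
| stats count by status

Each "**Starting**" event opens a new run per device; a run counts as good only if a "Shutting down" message appears somewhere in it, which gives one good and two bad runs for the sample above.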
I'm trying to run a lookup against a list of values in an array. I have a CSV which looks as follows:

id   x      y
123  Data   Data2
321  Data   Data2
456  Data3  Data3

The field from the search is an array which looks as follows: ["123", "321", "456"]

I want to map the lookup values. Do I need to iterate over the field, or can I use a lookup directly? What is the best option?
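A minimal sketch of one approach, assuming the array has already been extracted into a multivalue field called id and that my_ids.csv stands for your lookup definition or uploaded lookup file (both names are placeholders); mvexpand keeps one row per id so each value gets its own lookup match:

| mvexpand id
| lookup my_ids.csv id OUTPUT x y
| stats values(x) as x values(y) as y by id

If the array is still raw JSON, spath (for example spath path=ids{} output=id, with ids{} standing in for your actual path) can produce the multivalue field first.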
I have a working query that uses transaction to find the starting/ending log events. I am trying to make some changes, but transaction is not working as I expected. In my current working example I am looking for a job name and then the starting and ending log events. My current code uses one query:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")

This works fine when there are no issues, but if a job fails there will be multiple "START of script" events and only one "COMPLETED OK" event. So I tried reworking my query as follows, to only get the most recent of each of the start/completed log events:

index=anIndex sourcetype=aSourcetype aJobName AND "START of script"
| head 1
| append [ index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" | head 1 ]

But when I get to the part of creating a transaction, the transaction only has the starting log event:

| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=CASE(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval {event_name}_time=_time
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keeporphans=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")

Is the use of | append [...] the cause? If append cannot be used with transaction, what other way can I get the data I'm looking for?
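A minimal sketch of an alternative that avoids both append and transaction, under the assumption that the run you care about is the most recent start and the most recent completion per host and job; stats picks those out directly:

index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| rex "(?<event_name>START of script|COMPLETED OK)"
| eval event_name=case(event_name=="START of script", "script_start", event_name=="COMPLETED OK", "script_complete")
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| stats max(eval(if(event_name=="script_start", _time, null()))) as script_start_time max(eval(if(event_name=="script_complete", _time, null()))) as script_complete_time by host batchJobName
| eval duration=script_complete_time - script_start_time

If a failed retry means a start can land after the last completion, a final where script_start_time<=script_complete_time (or whatever rule fits the job) can guard against pairing the wrong events.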
Hi, how can we fix this issue on the ES search head?

Health Check: msg="A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" stanza="default"

Thanks.
I'm having some trouble coming up with the SPL for the following situation: I have a series of events with a timestamp. These events have an extracted field with a value of either "YES" or "NO". When sorted by _time we end up with a list like the following:

_time  Result
time1  YES
time2  NO
time3  NO
time4  YES

I'd like to calculate the duration between the "NO" values and the next "YES" value. So in this case we'd have a duration equal to time4 - time2.

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_before="("removed==\"YES\"")" last(_time) as lastTime first(_time) as firstTime count BY removed
| eval duration = round((lastTime - firstTime)/60,0)
| table removed duration count _time

I've tried to lean on streamstats, but the result resets the count at the last "NO" and doesn't count the time of the next "YES". We end up with a duration equal to time3 - time2. Also, in the case of a single "NO" followed by a "YES" we get a duration of 0, which is also incorrect. I feel like I'm missing something extremely obvious.
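A minimal sketch of a grouping approach, assuming the same base search and the removed field as defined above: number each gap by counting the YES events seen so far (excluding the current event), then take the first NO and the closing YES per group.

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort 0 _time
| eval removed=if('Actual action' == "Quarantined", "YES", "NO")
| streamstats current=f count(eval(if(removed=="YES", 1, null()))) as grp
| stats min(eval(if(removed=="NO", _time, null()))) as first_no max(eval(if(removed=="YES", _time, null()))) as closing_yes by grp
| where isnotnull(first_no) AND isnotnull(closing_yes)
| eval duration=round((closing_yes - first_no)/60, 0)
| table grp duration

For the sample above this yields one row with duration = (time4 - time2)/60, and a single NO followed by a YES gets a non-zero duration as well.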
Hi Folks, I am trying to figure out how to compare a single field based on another field called timestamp. I pull data into Splunk via a JSON file that looks like the following:

{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.241.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.242.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.241.0.0/31"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.245.0.0/30"}

There will be tens or hundreds of unique dst values, all with the same timestamp value. What I'd like to do is compare all dst values for one timestamp value against the set of dst values for a different timestamp value. So far, I've been able to do an appendcols plus a simple eval to compare stats values from one timestamp to another:

index=<index> host=<host> sourcetype=_json timestamp=2023-11-07T12:25:43.208903
| stats values(dst) as old_prefix
| appendcols [search index=<index> host=<host> sourcetype=_json timestamp=2023-11-10T13:12:17.529455 | stats values(dst) as new_prefix]
| eval result=if(old_prefix=new_prefix, "pass","fail")
| table old_prefix new_prefix result

And these are the results I get (old_prefix and new_prefix are multivalue):

old_prefix                                 | new_prefix                                 | result
10.240.0.0/30 10.241.0.0/30 10.242.0.0/30  | 10.240.0.0/30 10.241.0.0/31 10.245.0.0/30  | fail

But what I'd really want to see is something along the lines of this:

old_prefix     | new_prefix     | result | present_in_old_table | present_in_new_table
10.240.0.0/30  | 10.240.0.0/30  | pass   |                      |
10.241.0.0/30  |                | fail   | 10.241.0.0/30        |
               | 10.241.0.0/31  | fail   |                      | 10.241.0.0/31
10.242.0.0/30  |                | fail   | 10.242.0.0/30        |
               | 10.245.0.0/30  | fail   |                      | 10.245.0.0/30

Or this:

old_prefix                                 | new_prefix                                 | result | present_in_old_table         | present_in_new_table
10.240.0.0/30 10.241.0.0/30 10.242.0.0/30  | 10.240.0.0/30 10.241.0.0/31 10.245.0.0/30  | fail   | 10.241.0.0/30 10.242.0.0/30  | 10.241.0.0/31 10.245.0.0/30

Is this something that could reasonably be done inside Splunk? Please let me know if you have any further questions for me.
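A minimal sketch of one way to get the per-prefix comparison, assuming the two timestamp values are known as in the example (quoted here to keep the whole value intact); grouping on dst avoids the positional alignment that appendcols does:

index=<index> host=<host> sourcetype=_json (timestamp="2023-11-07T12:25:43.208903" OR timestamp="2023-11-10T13:12:17.529455")
| eval table_side=if(timestamp=="2023-11-07T12:25:43.208903", "old", "new")
| stats values(table_side) as sides by dst
| eval result=if(mvcount(sides)==2, "pass", "fail")
| eval present_in_old_table=if(isnotnull(mvfind(sides, "^old$")), dst, null())
| eval present_in_new_table=if(isnotnull(mvfind(sides, "^new$")), dst, null())
| table dst result present_in_old_table present_in_new_table

This gives one row per prefix with pass/fail and which side(s) it appeared on, which is close to the first desired layout (with a single dst column instead of separate old_prefix/new_prefix columns).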
Hi, I am trying to build a dashboard and need queries for the following searches:
1. Report false positives per total alerts
2. Report monthly Splunk alerts by severity: high, medium, low
Can anyone help me build these?
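Assuming these alerts are Enterprise Security notable events, a rough sketch; the `notable` macro and the urgency and status_label fields come from a default ES setup and are assumptions to adapt to your environment (the exact "false positive" status label in particular varies):

Monthly alerts by urgency:

`notable`
| timechart span=1mon count by urgency

False positives as a share of the total:

`notable`
| stats count as total count(eval(if(match(status_label, "(?i)false positive"), 1, null()))) as false_positives
| eval false_positive_pct=round(false_positives / total * 100, 2)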
Hi, I need some help in creating a table from the below JSON events. Can someone please help me with that? The table columns should be 'Name' and 'Count'; Name should hold "cruice", "crpice", etc. and Count should hold the corresponding values. Any help would be appreciated. Thanks.

The events (all timestamped 11/7/23 9:04:23.616 PM, with host = iapp6373.howard.ms.com, source = /tmp/usage_snapshot.json, sourcetype = tsproid_prod.db2ts_log_generator:app) look like this:

"Year": {
"Top30RequesterInOneYear": {
"cruice": 2289449,
"crpice": 1465846,
"zathena": 1017289,
"qrecon": 864252,
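Since each event appears to be a single line of a larger JSON document, a minimal sketch using a regex to pull out the name/value pairs; the index is a placeholder, while the source and sourcetype are taken from the events above:

index=<your_index> sourcetype="tsproid_prod.db2ts_log_generator:app" source="/tmp/usage_snapshot.json"
| rex field=_raw "\"(?<Name>[^\"]+)\":\s*(?<Count>\d+)"
| where isnotnull(Name) AND isnotnull(Count)
| table Name Count

Lines such as "Year": { carry no numeric value, so they simply fail the rex and are dropped by the where clause. If the whole file is indexed as one event instead, rex with max_match=0 (plus mvzip/mvexpand to pair the values) would be the starting point.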
I don't know why Splunk doesn't distribute clear instructions or tools to install and configure the forwarder properly on Linux. Red Hat 9.x does not have init.d, so you need to set boot-start with -systemd-managed 1, but even once installed the service also needs systemctl enable SplunkForwarder.service. On Red Hat 8 this is not the case. The latest forwarder 9.1.1 also won't set up properly if you don't use user-seed.conf.

I came up with the script below, which does the job somehow; it would be nice if someone would add their ideas to make it better. (I'm running Splunk as root for testing purposes.)

#!/bin/bash

# Splunk Universal Forwarder installer package
SPLUNK_FILE="splunkforwarder-9.1.1-64e843ea36b1.x86_64.rpm"

# Install the forwarder RPM
rpm -ivh "$SPLUNK_FILE"

# Make the install owned by root (running as root for testing only)
chown -R root:root /opt/splunkforwarder

# Create user-seed.conf so Splunk sets the admin credentials without user interaction
cat <<EOF > /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = changeme
EOF

# Configure the forwarder: deployment server, license, ports (Red Hat 8 / init.d-style boot-start)
/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0
/opt/splunkforwarder/bin/splunk start --no-prompt --answer-yes

# Red Hat 9.x variant: use systemd-managed boot-start and enable/start the unit
#/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
#/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1
#systemctl enable SplunkForwarder.service
#systemctl start SplunkForwarder.service
Hi! I have the following table:

SESSION_ID  SUBMITTED_FROM  STAGE
1                           submit
1           startPage       someStage1
2                           submit
2           page1           someStage1
2           page2           someStage2

How could I count the number of SESSION_IDs that have SUBMITTED_FROM=startPage and STAGE=submit? Looking at the above table, the outcome of that logic should be 2 SESSION_IDs.
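A minimal sketch, appended to your base search and reading the requirement as "the session has at least one row with SUBMITTED_FROM=startPage and at least one row with STAGE=submit" (note that this reading returns 1 for the sample table, since only session 1 has a startPage row, so adjust the where clause if the intent is different):

| stats values(SUBMITTED_FROM) as submitted_from values(STAGE) as stages by SESSION_ID
| where isnotnull(mvfind(submitted_from, "^startPage$")) AND isnotnull(mvfind(stages, "^submit$"))
| stats dc(SESSION_ID) as session_count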
Hi All, I have this query that runs:

| tstats latest(_time) as LatestEvent where index=* by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent
| search NOT (index="cim_modactions" OR index="risk" OR index="audit_summary" OR index="threat_activity" OR index="endpoint_summary" OR index="summary" OR index="main" OR index="notable" OR index="notable_summary" OR index="mandiant")

The result is below. Now how do I add index=waf_imperva? Thanks.

Regards,
Roger
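Assuming the goal is to exclude waf_imperva the same way as the other indexes: since index has already been renamed to idx at that point, the final filter should reference idx (as written, the NOT clause on index after the rename should no longer exclude anything, because the field is now called idx). A sketch of the adjusted last line:

| search NOT (idx="cim_modactions" OR idx="risk" OR idx="audit_summary" OR idx="threat_activity" OR idx="endpoint_summary" OR idx="summary" OR idx="main" OR idx="notable" OR idx="notable_summary" OR idx="mandiant" OR idx="waf_imperva")

Moving the exclusions into the tstats where clause instead (before the rename) would also work and avoids retrieving those indexes at all.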
Hi, the code is the following:

index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) by src_sg_info
| rename user1 as "David E"
| rename user2 as "Mary E"
| rename user3 as "Lucy E"

If the number of users is 0, then we know there is no VPN user at all. The plan is to print that out together with the active VPN users in the timechart, if possible. (The rough sketch I tried to draw below the query was meant to show a timeline where busy spans are labelled with the user names, e.g. user2 and user3, and empty spans are labelled "No VPN user".)
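A minimal sketch of one way to surface the "no users" condition, assuming a single aggregate series is acceptable; spans with no connections get a zero (via fillnull if needed) and an eval labels them:

index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) as active_vpn_users
| fillnull value=0 active_vpn_users
| eval status=if(active_vpn_users==0, "No VPN user", "VPN users connected")

Keeping the per-user breakdown (the by src_sg_info version) and the "No VPN user" label in the same chart is harder, since a span with no events produces no user column to label; the aggregate count above is the simpler path.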
We are having issues with data models from Splunk_SA_CIM running for a very long time (hitting the limit) and causing out-of-memory (OOM) issues on our indexers. We have brand new physical servers with 128 GB RAM and 48 cores. The Enterprise Security search head cluster has data model acceleration enabled, running on both the old and the new hardware, yet we are getting OOM on the new hardware and every run hits our 30+ minute limit. Example configuration for the Authentication DMA:

allow_old_summaries = true
allow_skew = 5%
backfill_time = -1d
cron_schedule = */5 * * * *
earliest_time = -6mon
hunk.compression_codec = -
hunk.dfs_block_size = 0
hunk.file_format = -
manual_rebuilds = true
max_concurrent = 1
max_time = 1800

Any tips on troubleshooting data models that run for a very long time and cause out-of-memory (OOM) errors? Thanks!
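One place to start is the scheduler's own view of the acceleration jobs. A sketch, assuming the standard _internal scheduler logs are searchable from the ES search head (the savedsearch_name pattern and field names are from a typical deployment):

index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_DM_*"
| stats count avg(run_time) as avg_run_time max(run_time) as max_run_time by savedsearch_name, status
| sort - max_run_time

Comparing avg_run_time against the five-minute cron interval and the 1800-second max_time shows which models never finish within their window; the -6mon earliest_time (a very large summary range) and runs that repeatedly end in skipped or continued status are worth a close look.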
Hello, I received the following error; the issue resolved itself after 4 hours. The CSV file size is 54 MB.

Streamed search execute failed because: Error in 'lookup' command: Failed to re-open lookup file: 'opt/splunk/var/run/searchpeers/[random number]/apps/[app-name]/lookups/test.csv'

I am aware that there is already a post regarding this, but I have more questions.

1) What is the cause of this error? Is it because of the bug mentioned in the old post below? I am running 9.0.4, where the bug should have been fixed.
https://community.splunk.com/t5/Splunk-Enterprise/Message-quot-Streamed-search-execute-failed-because-Error-in/m-p/569878

2)
a) Is it because max_memtable_bytes in limits.conf is 25MB?
https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Limitsconf
b) How do I check limits.conf via the GUI without the admin role?
c) What does "Lookup files with size above max_memtable_bytes will be indexed on disk" mean? Is it a good thing or bad?
d) If I see a cs.index.alive file auto-generated, does it mean the lookup is indexed on disk?
[random number]/apps/[app-name]/lookups/test.csv
[random number]/apps/[app-name]/lookups/test.csv_[random number].cs.index.alive

3) If I am not allowed to change any settings (e.g. increase the 25MB limit), what is the solution for this issue?

I appreciate your help. Thank you.
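On question 2b, a sketch of one way to read the effective limit from a search instead of the filesystem, assuming your role can run the rest command against this endpoint (it may be restricted, which would itself answer the question for your account):

| rest /services/configs/conf-limits/lookup splunk_server=local
| fields splunk_server max_memtable_bytes

splunk_server=local reports the search head's value; the lookup in the error is executed on the search peers, whose limits.conf may differ.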
Hi guys, I am performing a POC to import our parquet files into Splunk. I have managed to write a Python script to extract the events (i.e. the raw logs) into a dataframe. I also wrote a Python script to pump the logs via the syslog protocol to a heavy forwarder and then on to the indexer. I am using the syslog method because I have many log types, and with a [udp://portnumber] input per type I can ingest multiple kinds of logs at once into different sourcetypes. However, when I do this I am not able to retain the original datetime on the raw event; Splunk takes the datetime at the point I sent the event. (I am using Python because all these parquet files are stored in an S3 container, so it is easier for me to loop through the directory and extract the files.)

I was hoping someone could help me out: how can I keep the original timestamp of the logs? Or is there a more effective way of doing this?

Sample log from Splunk after indexing:

Nov 10 09:45:50 127.0.0.1 <190>2023-09-01T16:59:12Z server1 server2 %NGIPS-6-430002: DeviceUUID: xxx-xxx-xxx

Here is my code to push the events via syslog:

import logging
import logging.handlers
import socket
from IPython.display import clear_output

# Create the logger (Python logging, not ArcSight logger)
my_loggerudp = logging.getLogger('MyLoggerUDP')

# Messages will be sent at INFO level
my_loggerudp.setLevel(logging.INFO)

# Define the syslog handler (UDP).
# 'localhost', 1025 = address and UDP port of the syslog collector defined in the HF's inputs.conf;
# for TCP, use the TCP port with socktype=socket.SOCK_STREAM instead.
handlerudp = logging.handlers.SysLogHandler(address=('localhost', 1025), socktype=socket.SOCK_DGRAM)
my_loggerudp.addHandler(handlerudp)

# Send each extracted raw event; df is the dataframe built from the parquet files
event = df["event"]
count = len(event)
for x in event:
    clear_output(wait=True)
    my_loggerudp.info(x)
    my_loggerudp.handlers[0].flush()
    count -= 1
    print(f"logs left to transmit: {count}")
    print(x)
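If the syslog transport is kept, one common way to make Splunk honor the embedded event time rather than the arrival time is a time-extraction override on the sourcetype at the parsing tier (the heavy forwarder here). A sketch, assuming the ISO-8601 timestamp always follows the <PRI> number as in the sample event; the sourcetype name is a placeholder:

# props.conf on the heavy forwarder
[my_ngips_syslog]
# skip past "Nov 10 09:45:50 127.0.0.1 <190>" and read the 2023-09-01T16:59:12Z part
TIME_PREFIX = <\d+>
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 32

An alternative worth weighing is skipping syslog entirely and sending each record to the HTTP Event Collector with an explicit time field, which sidesteps timestamp parsing altogether.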
Hi, I'm trying to convert this search to show totals in hours instead of days/dates. Can anyone help me please?

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1d
| eval _time=strftime(_time,"%d")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
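If "totals in hours" means bucketing the errors by hour instead of by day, a minimal sketch of the same search with the span and the label format swapped (the "%d %H:00" label is an assumption, adjust to taste; note that an hourly split over a long range can exceed the limit=100 column cap):

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| bin _time span=1h
| eval _time=strftime(_time,"%d %H:00")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total
| fields - Total

Also worth noting: addtotals creates a field called Total, so fields - TOTAL in the original would not remove it, since field names are case-sensitive.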