Hello, I’m experiencing a connectivity issue when trying to send events to my Splunk HTTP Event Collector (HEC) endpoint. I have confirmed that HEC is enabled, and I am using a valid authorization token. Here’s the command I am using:

curl -k "https://[your-splunk-instance].splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk [your-token]" \
  -H "Content-Type: application/json" \
  -d '{"event": "Hello, Splunk!"}'

Unfortunately, I receive the following error:

curl: (28) Failed to connect to [your-splunk-instance].splunkcloud.com port 8088 after [time] ms: Couldn't connect to server

Troubleshooting steps taken:
- Successful connection from another user: notably, another user on a different system was able to successfully use the same curl command to reach the same endpoint.
- Network connectivity: I verified network connectivity using ping and received a timeout for all requests. I performed a traceroute and found that packets are lost after the second hop.

Despite these efforts, the issue persists. If anyone has encountered a similar issue or has suggestions for further troubleshooting, I would greatly appreciate your help. Thank you!
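Given that ping times out and the traceroute dies after the second hop from this machine while another user can reach the same endpoint, the symptoms point at a local network or firewall block rather than at HEC itself. A minimal reachability sketch, reusing the placeholder hostname from the post (nc checks the raw TCP path; openssl checks the TLS handshake that curl also needs):

# Check raw TCP reachability of the HEC port; a firewall block typically times out here too
nc -vz -w 5 your-splunk-instance.splunkcloud.com 8088

# Check the TLS handshake independently of curl
openssl s_client -connect your-splunk-instance.splunkcloud.com:8088 </dev/null

If nc also times out, running the same commands from the working user's system and comparing output should localize where the block sits.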
Dashboard panels have recently stopped being a consistent size across the row, introducing grey space on our boards. This is happening throughout the app on Splunk Cloud version 9.2.2403.111. Does anyone know of any changes or settings which may have affected this and how it can be resolved? Thanks
On a new, fresh deployment of O11y, we are following the guide from the setup wizard and running the following helm install command:

helm install splunk-otel-collector --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxx-xxxxx,clusterName=eks-test-cluster,splunkObservability.realm=eu2,gateway.enabled=true,splunkPlatform.endpoint=https://my-splunk-cloud-hec-input,splunkPlatform.token=my-token,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" splunk-otel-collector-chart/splunk-otel-collector

However, this fails with the following error:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint

This seems to be because _helpers.tpl expects a value for instrumentation.exporter.endpoint, whereas according to the chart (and the documentation) the value is instrumentation.endpoint. It is referenced at line 13 here:
https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/_helpers.tpl

We have tried providing instrumentation.exporter.endpoint as an additional parameter, but instead get the error:

Values don't meet the specifications of the schema(s) in the following chart(s): splunk-otel-collector: - instrumentation: Additional property exporter is not allowed

(Which is true: instrumentation.exporter.endpoint is not defined at line 20 of https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/instrumentation.yaml)

We also get the same error if we provide a complete values.yaml file with both formats of the instrumentation endpoint defined.

It looks like _helpers.tpl was edited to include this endpoint specification about a month ago, so surely we cannot be the first people to be tripped up by this? Is there anything else I can try, or do we need to wait for the operator to be fixed?
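One workaround sketch, assuming the validation rule really is a recent regression in the chart: pin the install to a chart release that predates the _helpers.tpl change. The version below is a placeholder, not a verified release; pick a real one from the search output.

# List published chart versions to find one older than the _helpers.tpl change
helm search repo splunk-otel-collector-chart/splunk-otel-collector --versions

# Re-run the install pinned to that version (0.x.y is a placeholder)
helm install splunk-otel-collector --version 0.x.y --set="..." splunk-otel-collector-chart/splunk-otel-collector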
I recently took my Splunk Power User exam and I wish to know how long it takes to receive the results. Thank you.
Does Splunk, on-prem or Cloud, have a solution that allows users to operate as an Analyst when doing that role and to sign in or elevate to Admin when carrying out admin tasks? This is something that many standards require, such as ISO 27001 and others. I hope someone can help; if not, I will make a feature request.
Hi, I have encountered a confusing problem with Postgres data integration. I want to execute this statement every two minutes to retrieve the latest results from the database, so I use the following statement for filtering:

SELECT history_uint.itemid, history_uint.value, interface.ip, items.name, hosts.host, TO_TIMESTAMP(history_uint.clock) AS clock_datetime
FROM history_uint
JOIN items ON history_uint.itemid = items.itemid
JOIN hosts ON items.hostid = hosts.hostid
JOIN interface ON interface.hostid = hosts.hostid
WHERE history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)
AND items.flags = 4
AND (items.name LIKE '%Bits sent%' OR items.name LIKE '%Bits received%')
AND (hosts.host LIKE 'CN%' OR hosts.host LIKE 'os%')
ORDER BY history_uint.clock DESC
LIMIT 90000;

This SQL statement executes perfectly in database tools. However, it cannot be executed in dbxquery, and the error is unknown. I found that the key culprit is this clause:

history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)

When I replaced it with a literal Unix timestamp, everything was fine. I tried replacing it with other SQL expressions, but they all failed. Please help me analyze this issue. Thank you.

Some environment information:
Postgres version: 14.9
Java version: 21.0.5
DB Connect version: 3.18.1
Postgres JDBC version: 1.2.1

Thank you
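One alternative sketch worth trying, on the unverified guess that DB Connect's query handling trips over the FROM keyword nested inside EXTRACT: Postgres's date_part() is equivalent to EXTRACT(EPOCH FROM ...) and avoids the nested FROM entirely.

-- Same 120-second cutoff, expressed without EXTRACT(... FROM ...)
WHERE history_uint.clock > (FLOOR(date_part('epoch', NOW())) - 120)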
In Splunk Cloud, for one of my client environments, I'm seeing the message below:

TA-pps_ondemand Error: KV Store is disabled. Please enable it to start the data collection.

Please help me with a suitable solution.
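For what it's worth, a quick status check sketch, assuming your role can reach the REST endpoint (the endpoint and field name are from memory and worth verifying; in Splunk Cloud, enabling the KV store itself is typically something Splunk support has to do):

| rest /services/kvstore/status splunk_server=local
| table current.status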
Hey all, a two-part question here. I'm using the MLTK Smart Forecasting tool to model hard drive free space (time vs. hard drive space). Currently the y-axis (i.e. free space % / free space MB) automatically adjusts its range to the graph produced by the forecast. I want it to instead run from 0-100 (in the case of Free Space %) and be something reasonable for the Free Space MB line.

By extension, how would I get the graph to "tell or show" me where the x-intercept would be from the prediction and/or confidence interval (i.e. tell me the date the hard drive would theoretically run out of space, where Free Space %/MB = 0)?

Attached is an image of the current output visualization should this help.
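For the x-intercept part, one SPL sketch that could be appended to the forecast search, assuming MLTK's usual output naming of a predicted field plus confidence bounds (the field name below is a placeholder; check what the Smart Forecasting assistant actually emits):

| where 'predicted(free_space_pct)' <= 0
| head 1
| eval projected_exhaustion = strftime(_time, "%Y-%m-%d")
| table projected_exhaustion

The same pattern against the lower confidence field would give the earliest plausible exhaustion date rather than the central estimate.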
Hi there, I have an issue where the drill-down and next steps are not displayed in Incident Review. I built a Splunk lab for research and development by myself: just Splunk Enterprise and Enterprise Security (no other external apps), and I ingest DVWA into my Splunk. As you know, DVWA has various vulnerabilities, and I want to use it as a log source that I then manage in Splunk. Therefore, I made a rule for uploads of inappropriate files. The query is like this:

index=lab_web sourcetype="apache:access"
| rex field=_raw "\[(?<Time>[^\]]+)\] \"(?<Method>\w+) (?<Path>/DVWA/vulnerabilities/upload/[^/]+\.\w+) HTTP/1.1\" (?<Status>\d{3}) \d+ \"(?<Referer>[^\"]+)\" \"(?<UserAgent>[^\"]+)\""
| eval FileName = mvindex(split(Path, "/"), -1)
| eval FullPath = "http://localhost" . Path
| where match(FileName, "\.(?!jpeg$|png$)[a-zA-Z0-9]+$")
| table Time, FileName, FullPath, Status

In that correlation search, I added a notable with the drill-down and next steps filled in. But when I open Incident Review, the drill-down and next steps I created are not shown. Maybe there is an application I haven't installed, or something else? I will attach my full correlation search settings, including the notable, drill-down, and next steps.

Splunk Enterprise version: 9.3.1
Enterprise Security version: 7.3.2
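For reference, a sketch of where these settings normally land in the correlation search's savedsearches.conf stanza in ES (the stanza name is made up, and the parameter names are from memory and worth verifying against your ES version; checking the stanza on disk at least confirms whether the editor actually saved them):

# Hypothetical stanza name for the DVWA upload rule
[Lab - DVWA Inappropriate Upload - Rule]
action.notable = 1
action.notable.param.drilldown_name = View raw upload events
action.notable.param.drilldown_search = index=lab_web sourcetype="apache:access" Path="/DVWA/vulnerabilities/upload/*"
action.notable.param.next_steps = {"version": 1, "data": "Review the uploaded file and block the source IP."}

If the parameters are present on disk but still not rendered, the Incident Review detail configuration would be the next place to look.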
I'm trying to format timestamps in a table in Dashboard Studio. The original times are values such as:

2024-10-29T10:13:35.16763423-04:00

That is the value I see if I don't add a specific format. If I add a format to the column, "YYYY-MM-DD HH:mm:ss.SSS Z", it is formatted as:

2024-10-29 10:13:35.000 -04:00

Why are the millisecond values zero? Here is the section of the source code for reference:

"visualizations": {
  "viz_mfPU11Bg": {
    "type": "splunk.table",
    "dataSources": {
      "primary": "ds_xfeyRsjD"
    },
    "options": {
      "count": 8,
      "columnFormat": {
        "Start": {
          "data": "> table | seriesByName(\"Start\") | formatByType(StartColumnFormatEditorConfig)"
        }
      }
    },
    "context": {
      "StartColumnFormatEditorConfig": {
        "time": {
          "format": "YYYY-MM-DD HH:mm:ss.SSS Z"
        }
      }
    }
  }
},

Any ideas what I'm doing wrong?

Thanks,
Andrew
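One workaround sketch, assuming the table's time formatter is what drops the subseconds: reshape the value in SPL before it reaches the table, so the column renders as a plain string (Start is the field name from the config above; the regex keeps the first three fractional digits and never reparses the timestamp):

| eval Start = replace(Start, "^(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}\.\d{3})\d*([+-]\d{2}:\d{2})$", "\1 \2 \3")

With the input above this yields 2024-10-29 10:13:35.167 -04:00; you would then remove the column format so Dashboard Studio doesn't reformat it again.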
The Stream app can save pcaps using the Configure Packet Stream feature. I was able to get packets saved using just IPs. Now I want to search for content based on some Snort rules. For ASCII content, I am trying to create a new target using the content/contains field by just putting in an ASCII word. For hex values, there are no instructions. Do I use escape characters (\x01\x02...), pipe-delimited bytes (|01 02 ...|), or a regular expression? Is there an example?
October 2024 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go

Training You Gotta Take

SOAR like an eagle | Investigating Incidents with Splunk SOAR
Ready to elevate your security skills and soar above the rest? Our instructor-led course, Investigating Incidents with Splunk SOAR, will have you flying through security incidents with confidence. In this three-hour course, you’ll learn SOAR concepts, investigations, running actions and playbooks, and managing cases with workflows. Designed for security practitioners, this hands-on course with labs equips you to respond, investigate vulnerabilities, and take action to keep your organization secure. All you need is some basic security operations knowledge, and you’re ready to take flight.
Gotta learn to SOAR | Instructor-led with hands-on labs

Learn it blog-style | Splunk Learn
As if instructor-led training, eLearning with labs, free eLearning, Splunk University, Lantern, and YouTube weren’t enough ways to learn, we’ve got one more! The Splunk Learn blog is another learn-at-your-own-pace-on-your-own-time option for tips, tutorials, and insights about Splunk and using Splunk. Learn blogs are a great complement to our Splunk Education curriculum, serving to reinforce what you may have learned in class or to test your skills with a new use case. But please don’t be overwhelmed with all the ways we have your back, just know we come at it like “Yo, different strokes for different folks.”
Gotta learn from stories | Learn blogs

Things You Needa Know

How to be like Brandon | SMARTNESS Series, Episode 3
Ever listen to a podcast or watch a show where someone’s story totally resonates, and you think, Wow, that could be me? Well, that’s exactly the vibe of our SMARTNESS series. It’s like the career-growth version of “What Now?” with Trevor Noah. In Episode 3, we spotlight Brandon Sternfield. His journey through Splunk training, hands-on learning, and connecting with the incredible Splunk community helped him unlock new career possibilities. If you’re looking for that spark to ignite your own career growth, then you gotta meet Brandon.
Needa be inspired | Meet Brandon

Career moves | Vids and tips about Splunk Education
Think of it like TikTok – quick, interesting, but with less dancing! In this short video, you’ll meet the duo of Alex and Ashley. Alex dives into how learning Splunk can fuel your career growth – no matter where you work – while Ashley shares her insider tips on standing out as a top candidate. Whether you’re looking to expand your skills or land your next big role, Splunk Education and these expert tips are the perfect moves to help you succeed.
Needa hear from experts | Splunk career tips

Places You’ll Wanna Go

Meet Duke Cyberwalker | A hero’s journey with Splunk
We like to say, the lightsaber is to Luke as Splunk is to Duke. Curious yet? Then read Eric Fusilero’s latest blog about the thrilling saga of Duke Cyberwalker, a fresh college grad turned cybersecurity hero. It’s not just a creative and engaging narrative, it’s a metaphor for daily professional challenges and growth.
Join Duke on his epic adventure and discover how you, too, can transform the mundane into an adventure with Splunk.
Go meet a hero | A blog of adventure

Splunk YouTube | Short videos. Big impact.
Wearing shorts to a red carpet event? Probably not the best look. But Splunk How-To YouTube Shorts? Now that’s always appropriate! We’re excited to announce a new series of bite-sized videos dedicated to helping you ace your Splunk Certification. Whether you’re preparing for the exam or just brushing up on key concepts, these quick, engaging videos give you the tips and tricks you need – in under 60 seconds! So, if you’re ready to level up your skills, check out the Splunk How-To YouTube Shorts and get one step closer to your certification goals.
Go get them shorts | Splunk Certification in seconds

Find Your Way | Learning Bits and Breadcrumbs
Go Chat | Join our Community User Group Slack Channel
Go Stream It | The Latest Course Releases (Some with Non-English Captions!)
Go Last Minute | Seats Still Available for ILT
Go to Lantern | For Ways to Use Splunk More Efficiently
Go to STEP | Get Upskilled
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Index It | Subscribe to our Newsletter

Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: 44 + 4/4 = 45
When creating an incident for a specific server, we want to include a link to that entity in IT Essentials Work; however, the URL appears to be accessible only via the entity_key.

Is there any simple way to get the URL directly to an entity from the hostname, or is it required to get the entity_key from the KV store itsi_entities and then combine that into the URL?

In Splunk App for Infrastructure, you could simply use the host name in the URL, but I cannot find any way to do this with ITEW.

Example URL: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=82570f87-9544-47c8-bc6g-e030c522barb

Looking to see if there's a way to do something like this: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?host=<hostname>
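If the KV store route turns out to be the only option, a minimal SPL sketch of the lookup-then-build step, assuming the itsi_entities collection exposes the entity title and _key the way it has in ITSI versions I've seen (verify both field names in your stack; "myhostname" is a placeholder):

| inputlookup itsi_entities where title="myhostname"
| eval url = "https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=" . _key
| table title url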
Hi team, I have been experiencing issues with log ingestion on a Windows server and I was hoping to get some advice.

The files are generated on a mainframe and transmitted onto a local share on a Windows server via TIBCO jobs. The files are generated in 9 windows throughout the day, 3 files at a time, varying in size from a few MB up to 3 GB.

The solution has worked fine in lower environments, likely because of looser file/folder restrictions, but in PROD only one or two files per window get ingested. The logs indicate that Splunk can't open or read the files.

The running theory is that the process writing the files to disk is locking them, so Splunk can't read them. I'm currently reviewing the permission sets for the TIBCO service account and the Local System account (the Splunk UF runs as this account) in the lower environments to try to spot any differences that could be causing the issue, based on the information in this post: https://community.splunk.com/t5/All-Apps-and-Add-ons/windows-file-locking/m-p/14126

In addition, I was exploring the possibility of using the "monitornohandle" stanza, as it seems to fit the use case I am dealing with: monitoring single files that don't get updated frequently. But I haven't been able to determine, based on the documentation, whether I can use wildcards in the filename. For reference, this is the documentation I'm referring to: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectorieswithinputs.conf#MonitorNoHandle.2C_single_Windows_file

I'd appreciate any insights from the community, either regarding permissions or the use of the "monitornohandle" input stanza.

Thanks in advance,
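For what it's worth, a minimal inputs.conf sketch of the stanza shape (the path, sourcetype, and index below are made-up examples). My reading of the docs is that MonitorNoHandle takes a literal file path, so the cautious assumption is one stanza per file with no wildcards; worth confirming with Splunk support before building around it:

[monitornohandle://D:\mainframe_share\feed_a.log]
sourcetype = mainframe:feed
index = main
disabled = 0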
Putting together a query that shows, at an individual alert level, the number of times the alert fired in a day and the average we were expecting. Below is the query as it stands now, but I am looking for a way to show only records from today/yesterday instead of for the past 30 days. Any help would be appreciated.

index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| eval average_triggered_alerts = round(average_triggered_alerts,0)
| eval comparison = case(
    actual_triggered_alerts = average_triggered_alerts, "Average",
    actual_triggered_alerts > average_triggered_alerts, "Above Average",
    actual_triggered_alerts < average_triggered_alerts, "Below Average")
| search comparison!="Average"
| table date ss_name actual_triggered_alerts average_triggered_alerts
| rename date as "Date", ss_name as "Alert Name", actual_triggered_alerts as "Actual Triggered Alerts", average_triggered_alerts as "Average Triggered Alerts"
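One way to sketch the filter, keeping the full 30-day window so the eventstats average stays intact and trimming only the displayed rows afterward; this line can go anywhere after the eventstats, e.g. just before the final table (field names come from the query above):

| where date >= strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")

relative_time(now(), "-1d@d") resolves to midnight at the start of yesterday, so only yesterday's and today's rows survive the string comparison on the ISO date.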
Is the list of clients displayed on the Forwarder Management console synchronized with other Splunk servers? We have two Splunk deployment servers and a cluster manager that show the same list. Previously, the DS only showed the clients it was actively connected with. Did this feature get added in 9.2 when the DS was updated?
I have the following props, which works fine in the "Add Data" GUI with a test file of logs:

EVENT_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
EVENT_BREAKER_ENABLE = true
LINE_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%b-%Y %H:%M:%S.%3N
TIME_PREFIX = named\[.+\]\:\s
TRUNCATE = 99999
TZ = US/Eastern

I am trying to pull milliseconds from the log using the second timestamp:

<30>Oct 30 11:31:39 172.1.1.1 named[18422]: 30-Oct-2024 11:31:39.731 client 1.1.1.1#1111: view 10: UDP: query: 27b9eb69be0574d621235140cd164f.test.com IN A response: NOERROR +EDV 27b9eb69be0236356140cd164f.test.com. 30 IN CNAME waw-test.net.; waw-mvp.test.net. 10 IN A 41.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1;

I have this loaded on the indexers and search heads, but it is still pulling from the first timestamp. A btool on the indexers shows this line that I have not configured:

DATETIME_CONFIG = /etc/datetime.xml

Is this what is screwing me up? Thank you!
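A verification sketch, on the hunch that the stanza shown isn't the one actually winning at parse time (for example, a heavy forwarder upstream parsing first, or a sourcetype name mismatch). btool with --debug prints which .conf file supplies each effective setting; the sourcetype name below is a placeholder:

splunk btool props list your:sourcetype --debug

Note that DATETIME_CONFIG = /etc/datetime.xml is the stock default every sourcetype inherits, so its presence alone shouldn't be what's pulling the wrong timestamp; an explicit TIME_PREFIX/TIME_FORMAT in the winning stanza should take precedence.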
If you’re in the Washington, D.C. area, this is your opportunity to take your career and Splunk skills to the next level by attending Splunk GovSummit on December 11, 2024. Register today!

Insights, Innovation, and Resilience
This one-day event offers public sector tech leaders a platform to exchange ideas and best practices to tackle evolving threats, drive innovation, and improve services for residents. The event will cover the latest trends in observability and cybersecurity, focusing on challenges like safeguarding critical systems, adopting AI responsibly, and meeting compliance standards. Attendees will gain insights from government and industry experts on building digital resilience to achieve mission success.

Bonus! Two Hands-On Training Courses Just for You
Experienced Splunk Education trainers will be onsite to teach two 7-hour technical workshops, perfect for anyone in the area who wants to sharpen their skills.

SOC Essentials: Investigating and Threat Hunting
A beginner-to-intermediate course where participants will learn to analyze events and hunt threats like a pro.
Time: 9:00 AM - 4:00 PM
Price: $900 or 90 Training Units

Exploring and Analyzing Data with Splunk
An intermediate-to-advanced course that dives deep into business insights, data analysis, and custom visualizations.
Time: 9:00 AM - 4:00 PM
Price: $900 or 90 Training Units

Register today!
Secure your spot and register today to enhance your skills and make meaningful connections at Splunk GovSummit 2024.

See you in class at Splunk GovSummit, Washington, D.C.

– Callie Skokos on behalf of the Splunk Education Crew
Afternoon, Splunkers! Timechart is really frothing my coffee today.

When putting in the parameters for a timechart, it always cuts off the latest time value. For example, if I give it a time window of four hours with a span of 1h, I get a total of four data points:

12:00:00
13:00:00
14:00:00
15:00:00

I didn't ask for four data points, I asked for the data points from 12:00 to 16:00. And in this particular example, no, 16:00 isn't a time that hasn't arrived yet or only has partial data; it does this with any time range I pick, at any span setting.

Now, I can work around this by programming the dashboard to add 1 second to the <latest> time for the time range. Not that huge a deal. However, I'm left with a large void on the right-hand side of the time range. Is there any way I can fix this, either by forcing the timechart to show me the whole range or by hiding the empty range?
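A tiny repro sketch of what I believe is happening here (timechart labels each bucket by its start time, so a 12:00-16:00 window yields buckets labeled 12:00 through 15:00, with the 15:00 bucket covering everything up to 16:00; nothing is actually dropped). The search below generates one event per hour and charts them; the labels land on bucket starts:

| makeresults count=5
| streamstats count as i
| eval _time = relative_time(now(), "@h") - (i * 3600)
| timechart span=1h count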