All Topics


In my SPL join query, I want to get events between, let's say, T1 and T2; however, the relevant events on the right side of the join happened between T1-60m and T2. I can't figure out how to do this in a dashboard or even a plain report, and using relative_time doesn't work for some reason. I appreciate any help.

index=myindex
| fields a, b, c
| join type=inner left=l right=r where l.keyid=r.keyid
    [ search index=myindex ```<- how to change the earliest to earliest-60m?```
      | fields d, f ]
| table l.a, l.b, l.c, r.d, r.f
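One commonly suggested pattern (a sketch, not tested against this data) is to let the join subsearch derive its own time range from the outer search using addinfo, then shift earliest back 60 minutes:

index=myindex
| fields a, b, c
| join type=inner left=l right=r where l.keyid=r.keyid
    [ search index=myindex
        [ | makeresults
          | addinfo
          | eval earliest=relative_time(info_min_time, "-60m"), latest=info_max_time
          | return earliest latest ]
      | fields keyid, d, f ]
| table l.a, l.b, l.c, r.d, r.f

The inner makeresults/addinfo/return subsearch runs first and expands into earliest=... latest=... terms for the join subsearch. This assumes a concrete time range is selected in the time picker; with "All time", info_min_time is not a usable epoch and the trick falls apart.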
Hi, I've been struggling for some time with the way baselines seem to work - to the extent that I'm feeling like I can't trust them to be used to alert us to degraded performance in our systems. I thought I would describe the issue and get the thoughts of the community. I'm looking for some thoughts from folks who are happy with baselines and how they are mitigating the issue I'm experiencing, or some input confirming that my thinking on this is correct. I have proposed what I think could be a fix towards the end. Apologies if this ends up being a bit of a long read, but it feels to me like this is an important issue – baselines are fundamental to AppD alerting and currently I don't see how they can reliably be used.

To summarise the issue before I go into more detail: it looks to me like AppD baselines, and the moving average used for transaction thresholds, ingest bad data when there is performance degradation, which renders baselines unfit for their purpose of representing 'normal' performance. This obviously then impacts any health rules or alerting that make use of these baselines.

Let me provide an example which will hopefully make the issue clear. A short time ago we had a network outage which resulted in a Major Incident (MI) and significantly increased average response time (ART) for many of our BTs. Because the ART metric baseline uses these abnormal ART values to generate the ongoing baseline, the baseline itself rapidly increased. The outage should have significantly exceeded multiple SDs above the expected 'normal' baseline, but because the bad data from the outage increased the baseline, other than the very brief spike right at the start, the increase in ART barely reached 1 SD above baseline.

Furthermore, the nature of the Weekly Trend – Last 3 Months baseline means that this 'bad' baseline will propagate forward. Looking at the first screenshot above, we can clearly see that the baseline is expecting 'normal' ART to be significantly elevated every Tuesday morning now. Presumably this will continue until the original outage spike moves out of the baseline rolling window in 3 months. This is more clearly shown if we look more closely at the current week so that the chart re-scales without the original ART spike present.

As far as the baseline is concerned, a large spike in ART every Tuesday morning is now normal. This means that less extreme (but still valid) ART degradation will not trigger any health rules that use this baseline. In fact, this could also generate spurious alerts on healthy performance if we were using an alert based on < baseline SD, as the healthy ART now looks to be massively below the 'normal' baseline.

To my mind this simply can't be correct behaviour by the baseline. It clearly no longer represents normal performance, which by my understanding is the very purpose of the baselines. The same problem is demonstrated if we use other baselines, but I'll not include my findings here for the sake of this already long post not becoming a saga.

This issue of ingesting bad data also impacts the Slow/Very Slow/Stalled thresholds and the Transaction Score chart: as can be seen, we had a major network outage which caused an increase in ART for an extended period.
This increase was correctly reflected in the Transaction Score chart for a short period, but as the bad data was ingested and increased the value of the moving average used for thresholds, we can see that even though the outage continued and ART stayed at an abnormal level, the health of the transactions stopped being orange Very Slow and moved through yellow Slow back to green Normal. And yet the outage was ongoing, the Major Incident was ongoing, and the ART had not improved from its abnormally high, service-impacting value. These later transactions are most certainly not Normal by a very long way, and yet AppD believes them to be normal because the moving average has been polluted by ingesting the outage ART data.

So after a short period of time the moving average used to define a Slow/Very Slow transaction no longer represents normal ART but instead has decided that the elevated ART caused by the outage is the new normal. I'd like to think that I'm not the only one who thinks this is undesirable. Any alerting based on slow transaction metrics would stop alerting and would report normal performance even though the outage was ongoing with service still being impacted.

Now, it's not my way to raise a problem without at least trying to provide a potential solution, and in this case I have two initial thoughts:

1. AppD adds the ability to lock the baseline in much the same way as we lock BTs. A BT is allowed to build up a baseline until it looks like it matches 'normal' behaviour as closely as we're likely to get. At this point the baseline is locked and no further data is added to it. If a service changes and we believe we have a new normal performance, then the baseline can be unlocked to ingest the new metrics and update the baseline to the new normal, at which point it can be locked again.

2. Instead of locking baselines, AppD could perhaps implement a system whereby bad data is not ingested into the baseline. Perhaps something like: any data point which comes in and triggers a health rule (or transaction threshold) is taken as evidence of abnormal performance and is not used to generate the baseline; maybe instead the last known non-triggering data point is used for the baseline. This would mean that the baseline probably would still increase during an outage (working on the assumption that a service degrades before failing, so the points immediately prior to the triggering of an alert might still be elevated above normal), but the baseline change would not be as fast or as catastrophic as the current method of calculating the rolling baseline/moving average.

Well, that pretty much wraps it up, I think. If you've made it this far then thanks for your time, and I'd really appreciate knowing if other folks are having a similar issue with baselines or have found ways to work around it.
Behind every business-critical application, you’ll find databases. These behind-the-scenes stores power everything from login and checkout to content lookups and “likes,” so issues with slow queries, too many full table scans (or too few index scans), incorrectly configured indices, or resource exhaustion directly impact application reliability and user experience. Thankfully, we can capture key database metrics to expose such issues and ensure optimal performance, efficient troubleshooting, and the overall reliability of our applications.

In this post, we’ll explore monitoring the open-source relational database PostgreSQL. Postgres is widely used in enterprise applications for its scalability, extensibility, and support. It also collects and reports a huge amount of information about internal server activity with its statistics collector. We’ll harness these stats using the OpenTelemetry Collector and first focus on the database and infrastructure itself in Splunk Observability Cloud. Then we’ll see how everything connects to our application performance data.

Which metrics matter and why

Monitoring database metrics is critical to proactively identifying issues, optimizing performance, and keeping databases reliable, but with so many stats coming from the statistics collector, it can be difficult to determine what to focus on. How do we isolate what’s critical to monitor effectively? It can help to focus on operation-critical key metrics like those related to:

- Query performance (query throughput/latency, locks, query errors, index hit rate)
- Resource utilization (connections, CPU, memory, disk space, table/index size, disk I/O, cache hits)
- Database health (replication lag, deadlocks, rollbacks, autovacuum performance)

Query Performance

Slow, resource-intensive queries or queries with high throughput can decrease the response time of our applications and degrade user experience. To prevent things like slow page load time, we want to focus on metrics related to query time: total response time, index scans per second, and database latency. These metrics will indicate if our database has the right or wrong indexes, absent indexes, fragmented tables, too many locks, etc.

Resource Utilization

Exceeding resource thresholds can halt application operations altogether. If total active connections are too high, resources might be exhausted and users might not be able to interact with our application at all. Monitoring resource usage like CPU, memory, and table/index size can keep our databases up and running, while also allowing for accurate capacity planning and optimal user experience.

Database health

Things like a high rollback-to-commit rate can indicate user experience issues; for example, users might be unable to complete product checkout on an e-commerce site. An increase in the number of dead rows can lead to degraded query performance or resource exhaustion with similar effects. Proactively monitoring these metrics helps easily identify inefficiencies, eliminate bottlenecks, reduce database bloat, and ultimately improve user experience.

How to get the metrics

So how do we get these metrics from PostgreSQL to the OpenTelemetry Collector? The first step is installing the OpenTelemetry Collector. If you’re working with the Splunk Distribution of the OpenTelemetry Collector, you can follow the guided install docs.
I’m using Docker Compose to set up my application, Postgres service, and OpenTelemetry Collector, so here’s how I added the Splunk Distribution of the OTel Collector.

If you already have your OpenTelemetry Collector configuration file ready to edit, you can proceed to add a PostgreSQL receiver to the receivers block so you can start collecting telemetry data from Postgres. Because I set up the Collector with Docker Compose, I manually created my Collector configuration file (otel-collector-config.yaml). Here’s the PostgreSQL receiver I added to my Collector config (a rough sketch of what such a config can look like appears at the end of this post).

Note: generally, your database and microservices would be behind network and API security layers, so your databases and services would talk to each other unencrypted, which is why I have tls set to insecure: true. If your database requires an authenticated connection, you’ll need to supply a certificate similar to what’s shown in the documentation’s sample configuration.

I’m also exporting data for my application to my Splunk Observability Cloud backend, so I’ve added an exporter for that and added both my new receiver and new exporter to my metrics pipeline.

If you’re not using the Splunk Distribution of the OpenTelemetry Collector or not exporting data to Splunk Observability Cloud, configuring the PostgreSQL receiver block will still follow the example shown, but you’ll need to configure a different exporter and add it to the metrics pipeline.

That’s it! Now either build, start, or restart your service (I did a docker compose up --build) and watch your database metrics flow into your backend observability platform of choice.

Note: If you’re working with a complex service architecture and the Splunk Distribution of the OpenTelemetry Collector, you might want to consider using automatic discovery. This allows the Collector to automatically detect and instrument services and their data sources. Depending on your environment and Collector installation method, you can follow the appropriate docs (Linux, Windows, Kubernetes) to deploy the Collector with automatic discovery.

How to see the data in Splunk Observability Cloud

Now that we’re collecting Postgres data, let’s jump over to Splunk Observability Cloud Infrastructure to visualize our telemetry data. We can select the Datastores section and open up either our PostgreSQL databases for database-level metrics or PostgreSQL hosts for metrics related to the infrastructure hosting your PostgreSQL database(s).

Going into the PostgreSQL databases navigator, we can see the metrics related to all of our databases. Here we see those key metrics that can hint at performance issues like total operations, index scans per second, and rollbacks. If total operations are high, we’ll know at a glance if our database resources can handle the current workload intensity. If our index scans per second drop, this can suggest we’re not using indexes efficiently. Databases with a high number of rollbacks could be experiencing an increase in transaction failures or deadlocks. All of these things can lead to slow or unreliable performance for our users.

Clicking into our database, we see database-specific metrics. We can monitor index size for efficient resource optimization and to right-size indexes. Dead row monitoring helps ensure efficient vacuuming to decrease table bloat and increase performance. It looks like we have 18 total operations per second, but 0 index scans per second, which might mean we aren’t indexing and could have some query performance inefficiencies.
Going into the PostgreSQL hosts navigator, we can view things like changes in operations per second, transactions, and disk usage to ensure our system can handle current workloads and maintain consistent performance. We can also click into a specific host to view individual host metrics, like how many transactions succeeded, failed, or were rolled back and the cache hits versus disk hits, both of which impact overall performance.

Moving between Infrastructure, APM, and Log Observer

Our database monitoring journey will most likely start at the service level or with the applications they back, so let’s dig into query performance and how to view its impacts on overall application performance. From within our PostgreSQL host navigator, if we select a specific host, we can view logs or related content in APM to view services that have a dependency on the currently selected host. We can then jump to Database Query Performance to view and analyze query time, latency, and errors to see which specific areas are impacting response time and user experience, and where we might be able to optimize our query performance.

Closing this out, we can see our Service Map and where the current database sits so that we can investigate specific errors, traces, or related logs. We moved from Infrastructure to Application Performance Monitoring, but we could have just as easily started with our Service Map and begun troubleshooting database performance issues from there using Database Query Performance, database requests/errors, or traces.

Wrap Up

Monitoring key metrics from the databases that power our applications is critical to the performance and reliability that our users count on. Configuring the OpenTelemetry Collector to receive PostgreSQL telemetry data and export this data to a backend observability platform is an easy process that provides invaluable visibility into the databases that back our services. If you’d like to try exporting your Postgres data to Splunk Observability Cloud, try it free for 14 days!

Resources

- Automatic Discovery and Instrumentation of PostgreSQL with Splunk OpenTelemetry Collector
- Database Monitoring: Basics & Introduction
- OpenTelemetry Collector
- Configuring Receivers
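The Docker Compose snippet and Collector configuration referenced in this post were shown as images in the original and aren't reproduced here. As a rough, untested sketch of what a PostgreSQL receiver plus a Splunk Observability Cloud exporter and metrics pipeline might look like (endpoint, user, database, and realm values are placeholders, not the author's actual settings):

receivers:
  postgresql:
    endpoint: postgres:5432          # placeholder: the Postgres service name from docker-compose
    transport: tcp
    username: otel_monitor           # placeholder monitoring user
    password: ${env:POSTGRES_PASSWORD}
    databases:
      - mydb                         # placeholder database name
    tls:
      insecure: true                 # matches the unencrypted in-network setup described in the post

exporters:
  signalfx:
    access_token: ${env:SPLUNK_ACCESS_TOKEN}
    realm: us1                       # use your own Splunk Observability Cloud realm

service:
  pipelines:
    metrics:
      receivers: [postgresql]
      exporters: [signalfx]

Refer to the receiver and exporter documentation linked in the Resources list for the authoritative set of options.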
Good evening everyone, we have a problem in a Splunk cluster composed of 3 indexers, 1 CM, 1 SH, 1 Deployer, 3 HF, 3 UF. The UFs receive logs from different Fortinet sources via syslog and write them to a specific path via rsyslog. Splunk_TA_fortinet_fortigate is installed on the forwarders. These logs must be saved to a specific index in Splunk, and a copy must be sent to two distinct destinations (third-party devices) in two different formats (customer needs). Since the formats are different (one of the two contains TIMESTAMP and HOSTNAME, the other does not), rsyslog saves them to two distinct paths applying two different templates. So far so good.

The issues we have encountered are:
- Some events are indexed twice in Splunk
- Events sent to the customer do not always have a format that complies with the required ones

For example, in one of the two cases the required format is the following:
<PRI> date=2024-09-12 time=14:15:34 devname="device_name" ...

But looking at the sent packets via tcpdump, some are correct, others are in the format
<PRI> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...
and more in the format
<PRI> <timestamp> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...

The outputs.conf file is as follows:

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://indexer_1:9997]
[tcpout-server://indexer_2:9997]
[tcpout-server://indexer_3:9997]

[tcpout:default-autolb-group]
server = indexer_1:9997,indexer_2:9997,indexer_3:9997
disabled = false

[syslog]

[syslog:syslogGroup1]
disabled = false
server = destination_IP_1:514
type = udp
syslogSourceType = fortigate

[syslog:syslogGroup2]
disabled = false
server = destination_IP_2:514
type = udp
syslogSourceType = fortigate
priority = NO_PRI

This is the props.conf:

[fgt_log]
TRANSFORMS-routing = syslogRouting

[fortigate_traffic]
TRANSFORMS-routing = syslogRouting

[fortigate_event]
TRANSFORMS-routing = syslogRouting

and this is the transforms.conf:

[syslogRouting]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup1,syslogGroup2

Any ideas? Thank you, Andrea
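Not a diagnosis, but a quick check that sometimes helps with the "indexed twice" part: if the UFs monitor both rsyslog output paths, the same event can arrive twice under different source values. A sketch (the index name is a placeholder); if the two rsyslog templates produce differently formatted copies of the same event, group on stable FortiGate fields rather than _raw:

index=your_fortigate_index earliest=-1h
| stats count values(source) AS sources values(host) AS hosts by _raw
| where count > 1

If duplicates show up with two different monitored source paths, the fix is usually to monitor only one of the rsyslog output files in Splunk and let rsyslog alone handle the second copy.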
Good day, I often run up against the issue of wanting to drag the text of a field name from the browser into a separate text editor. Whenever I drag it, it works, but it brings all the HTML metadata with it. Sometimes these field names are very long and get truncated on the screen, so it's very tough without copying and pasting. Has anyone found a good workaround for this? Right now a field name, when dragged from the web browser into a text editor, comes through like this:

https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now#https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now#

Ironically, text dragged from Splunk into this web dialog box works fine.
Hello, I'm not sure how to troubleshoot this at all. I've created a new Python-based app through the Add-on Builder that uses a collection interval of 60 seconds. The app input is set to 60 seconds as well. When I test the script, which makes chained API calls and creates events based off the last API call, it returns within 20 seconds.

The app should create about 50 events for each interval, so when searching I would expect to see about 50 events per minute, but I'm seeing 6 or 7 per minute. I ran the following query, and it shows that the event time and index time are within milliseconds of each other:

source=netscaler
| eval indexed_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table _raw event_time indexed_time

When looking at the app log, I see it's only making the final API calls every 20 seconds instead of making all 50 of the final API calls within milliseconds. Does anyone have any idea why this would occur and how I could resolve this lag? Thanks for your help, Tom
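One quick way to see whether events arrive in per-interval bursts or trickle in with a growing indexing lag (a sketch reusing the source from the question):

source=netscaler earliest=-60m
| eval lag_seconds=_indextime - _time
| timechart span=1m count avg(lag_seconds) AS avg_lag_seconds

A steadily rising avg_lag_seconds would point at the input falling behind its schedule rather than at indexing delay.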
Hey All, Can anybody help me with optimization of this rex:

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"

Example log:

"#HLS# IID: EB_FILE_S, STEP: SEND_TOF, PKEY: Ids:100063604006, 1000653604006, 6000125104001, 6000135104001, 6000145104001, 6000155104001, STATE: IN_PROGRESS, MSG0: Sending request to K, EXCID: dcd, PROPS: EVENT_TYPE: SEND_TO_S, asd: asd #HLE#

ERROR: "Streamed search execute failed because: Error in 'rex' command: regex="#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#" has exceeded configured match_limit, consider raising the value in limits.conf."
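One way to reduce the backtracking that trips match_limit is to replace the lazy .*? captures with bounded character classes wherever the data allows it, and to drop the \s* that overlaps with [^#]+ before #HLE#. A sketch that matches the example above, assuming MSG0 never contains a comma (PKEY keeps .*? because it legitimately contains commas):

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*EXCID:\s*(?P<EXCID>\w+),\s*PROPS:\s*(?P<PROPS>[^#]+)#HLE#"

If the events that blow the limit are ones that do not contain the #HLS#/#HLE# block at all, filtering those out before the rex (for example with a plain search on "#HLS#") also helps, since failed matches are where lazy quantifiers backtrack the most.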
I am using AWS SNS to send notifications, but I am not able to find a way to send all the results that triggered the query. I see the $result._raw$ option, but it does not contain any data in the notification. Can anyone please confirm how to send all query results to SNS? Thanks in advance.
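One workaround that's often used (a sketch; field names are placeholders): $result.<field>$ tokens only see the first result row, so collapse all results into a single field at the end of the alert search and reference that field instead:

... your base search ...
| stats list(_raw) AS raw_events
| eval all_events=mvjoin(raw_events, " | ")

The alert then has a single result row, and $result.all_events$ in the SNS message carries every triggering event (subject to the message size limits of the alert action and of SNS).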
Hello Splunkers, I'm trying to push data to indexers from HFs where I have syslog-ng receiving the logs. This is from a non-supported device, therefore a TA is not available on Splunkbase. My concern is with writing inputs.conf: can I just create one directory, call it cisco_TA, create a directory called local inside it, and place my inputs.conf there? Is that sufficient to create a custom TA and transport the logs, or should I create other directories such as default, metadata, licenses, etc.? Please can someone advise on the above. Thank you, regards, Moh.
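For reference, a minimal layout sketch of what such a custom add-on often looks like (cisco_TA is the name from the question; local/inputs.conf alone is generally enough to get data in, while app.conf and default.meta are recommended so the app has a clean name and sane permissions):

cisco_TA/
    default/
        app.conf
    local/
        inputs.conf
    metadata/
        default.meta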
Hi, I have two fields, and these fields will be in two different events. I want to search for events where aggr_id=*session_ID*; basically I'm looking to search for field1=*field2*.

field1: session_ID=1234567890
field2: aggr_id=ldt:1234567890:09821
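Search terms can't wildcard on another field's value, so one pattern is to derive a common key from both event types and group on it. A sketch that assumes aggr_id always has the form ldt:<session_ID>:<suffix> and that both event types live in the same index (adjust names as needed):

index=myindex (session_ID=* OR aggr_id=*)
| eval join_key=coalesce(session_ID, mvindex(split(aggr_id, ":"), 1))
| stats values(session_ID) AS session_ID values(aggr_id) AS aggr_id by join_key
| where isnotnull(session_ID) AND isnotnull(aggr_id)

The where at the end keeps only keys that appear in both kinds of events.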
My free 60-day trial expired, and I have now updated the license to a free trial license, but I'm unable to use the Splunk search head. Each time I get this: "Error in 'lit search' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK." I have already updated the license and restarted the application. Please help me with this.
I have an alert which I am trying to throttle based on a few fields from the alert, on the condition that if it triggers once then it shouldn't trigger for the next 3 days unless it has different results. But the alert is running every 15 minutes and I can see the same results from the alert every 15 minutes. My alert outputs its results to another index, for example:

blah blah , , , , , , , ,
| collect index=testindex sourcetype=testsourcetype

Based on my research, I came across a post which says: "Since a pipe command is still part of the search, throttling would have no effect, because the search hasn't completed yet and can't be throttled. I think this because the front end says 'After an alert is triggered, subsequent alerts will not be triggered until after the throttle period', but that doesn't say they aren't run."

Is that the case? If so, how can I stop duplicate values being written to my index?
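That understanding is consistent with throttling suppressing alert actions, not the search itself, so | collect still runs every 15 minutes. One approach is to make the search exclude anything already written to the summary index in the last 3 days before collecting. A sketch, with keyfield1/keyfield2 standing in for the fields you throttle on:

... your base search ...
| search NOT
    [ search index=testindex sourcetype=testsourcetype earliest=-3d
      | dedup keyfield1 keyfield2
      | fields keyfield1 keyfield2
      | format ]
| collect index=testindex sourcetype=testsourcetype

This only works while the subsearch stays under its result limits; if the summary index grows large, a lookup-based suppression list is the usual fallback.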
I am in the middle of a Splunk migration. One of the tasks is to move data from some sourcetypes onto the new servers using the | collect index=aws sourcetype=* command. The numbers added up after running checks. I ran the same checks again a day later and the numbers no longer match up.

August:
  Source 1 -> Old Splunk: 12,478,853 | New Splunk: 12,478,853
  Source 2 -> Old Splunk: 26,171,911 | New Splunk: 26,171,911

24 hours later:
  Source 1 -> Old Splunk: 12,478,853 | New Splunk: 12,477,696
  Source 2 -> Old Splunk: 26,171,911 | New Splunk: 3,001,183

I've set the following stanza within the indexes.conf file on the deployment server. Also, the index only contains 22 GB of data. Can you help?

[aws]
coldPath = $SPLUNK_DB\$_index_name\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\$_index_name\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\$_index_name\thaweddb
frozenTimePeriodInSecs = 94608000
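To narrow down where the events are going, it can help to compare daily counts on the old and new environments and see whether whole days disappear (which would point at retention settings or at the timestamps the collected events ended up with) or counts shrink across the board. A sketch to run on each side, adjusting the index name per environment:

| tstats count where index=aws by _time span=1d
| rename count AS daily_count

Comparing the two daily_count columns shows exactly which days lost events.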
Does anyone have an example of a coldToFrozenScript to be deployed in a clustered environment? I'm wary of having duplicate buckets, etc.
index=test | table severity location vehicle

Search results:
severity | location | vehicle
high     | Pluto    | Bike

testLookup.csv:
severity     | location                    | vehicle
high octane  | Pluto is one of the planet  | Bike has 2 wheels
  index=test | table severity location vehicle severity  location vehicle high Pluto Bike   testLookup.csv severity location vehicle high octane Pluto is one of the planet Bike has 2 wheels   As you can see on my table i have events. Is there a way to compare my table events to my testLookup.csv field values without using lookup command or join command ? Example. if my table events severity value have matched or has word same as "high" inside the severity in lookup field severity value then it is true otherwise false. Thank you.
Hello, we have decided to retire Splunk and the server that Splunk was running on. If the server is decommissioned, do we still need to decommission Splunk, or would one equal the other? If it wouldn't, is there a way to still decommission Splunk after the server has been decommissioned? Thank you.
Good day guys, I need to know how SVCs are actually calculated, with examples please. I have already gone through the Splunk docs and YouTube videos, but I still want to know how the SVC figures are arrived at. Kindly suggest. Thanks in advance.
Hi all, I am trying to show the connected duration, which is calculated using the transaction command, in a timechart. When I try the query below, the entire duration shows up at the earliest timestamp (start time) as a single column. I would like to show the connected duration in a column chart, with the area between start and end time colored. For example, if a device was connected from 20th August to 23rd August, I want the column to extend across these days. Currently, the entire duration is shown on the 20th alone. Kindly let me know your suggestions to implement this.

Query:
| transaction dvc_id startswith="CONNECTED" endswith="DISCONNECTED"
| timechart sum(duration) by connection_protocol
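One way to spread each session across the days it spans is to expand the transaction into one row per day and attribute only that day's share of the duration to it. A sketch built on the transaction output (86400 is seconds per day; field names other than dvc_id, duration, and connection_protocol are illustrative):

| transaction dvc_id startswith="CONNECTED" endswith="DISCONNECTED"
| eval session_start=_time, session_end=_time + duration
| eval day=mvrange(relative_time(session_start, "@d"), session_end, 86400)
| mvexpand day
| eval slice_start=max(session_start, day), slice_end=min(session_end, day + 86400)
| eval day_duration=slice_end - slice_start, _time=day
| timechart span=1d sum(day_duration) AS connected_seconds by connection_protocol

With a session from 20th to 23rd August, this produces partial values on the 20th and 23rd and full days in between, so the columns extend across the whole connected period.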
Hello, I am running Splunk Enterprise 9.2.2. I am trying to install Python for Scientific Computing for Windows, as I am running it on a Windows server: Python for Scientific Computing (for Windows 64-bit) | Splunkbase. However, I am getting the following error when I try installing the application. I tried with the tgz file and also with the extracted tar file, but both have the same issue.

It looks like the webpage at https://localhost:8000/en-US/manager/appinstall/_upload might be having issues, or it may have moved permanently to a new web address. ERR_CONNECTION_ABORTED

Is it due to the file size being overly huge? And what could be the solution? Thanks
I have a Splunk Cloud instance that receives logs from a Linux server that has a Splunk Heavy Forwarder on it. I am trying to update the forwarder to 9.3.x, but found online that I should step to 9.2.x first. On the server it appears that the update has taken and it's running Splunk 9.2.0 as expected. I am also seeing metrics.log data showing up on my cloud instance, but none of the other logs I push from this server are showing up. When I check the CMC app in Splunk Cloud, the update appears to have taken and is now showing in compliance. I am not sure what I am doing wrong, or what logs you might need to help figure out where the issue is. I only have about six months of Splunk experience, so forgive me if this is a silly question.
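Not a silly question at all. Since metrics.log is arriving, the forwarder-to-cloud connection is working, so a common next step is to check whether the inputs on the HF are being read after the upgrade. The HF's own splunkd.log is usually searchable from Splunk Cloud in _internal; a sketch (replace the host value with your HF's hostname):

index=_internal host=my_hf_hostname source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
| stats count by component, log_level
| sort - count

Errors from file-monitoring or output components there would narrow down whether the inputs stopped being read or the data is being dropped on the way out.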