All Posts

This documentation is confusing. First it talks about automating the rolling upgrade of a SHC, then it lists upgrading a standalone SH as a use case. To be honest, I'd avoid using this app. If for no other reason, it only supports replacing the running Splunk instance from a tgz archive, whereas I tend to use system packages (rpm or deb) whenever possible.

Actually, this is not entirely true.
1. With HEC, the "basic" form of... well, it's not even authentication as such, since the token is not very secret and is often shared by many different sources - is the HEC token itself.
2. Additionally, you can limit the source IPs from which the HTTP input accepts connections (the acceptFrom setting).
3. If you have TLS enabled, you can use the requireClientCert option to require the client to present a valid cert. By default this option is disabled, so HEC accepts TLS connections from anyone (possibly with the exception of clients not meeting the defined sslVersions or cipher suites).
4. In addition to 3., you can limit accepted clients to only those presenting certs containing either the sslCommonNameToCheck or sslAltNameToCheck values. It's a relatively rarely used option since, as @isoutamo said, HEC is typically a rather "open" input, but the options are there - see the sketch below.
I don't remember for certain, but I think the parameters in 3. and 4. can only be defined on the HEC input as a whole, not on a per-token basis.
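For illustration, a minimal inputs.conf sketch combining those settings - the CIDR range, common name and token value are placeholders, not values from this thread:

[http]
disabled = 0
enableSSL = 1
# point 2: limit which source IPs may connect at all
acceptFrom = 10.0.0.0/8
# point 3: require the client to present a valid certificate
requireClientCert = true
# point 4: additionally pin the accepted certificate names
sslCommonNameToCheck = hec-client.example.com

[http://my_token]
token = 11111111-2222-3333-4444-555555555555
index = main
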
You are doing isnull(Item.Subject). Since you are not enclosing the Item.Subject part in quotes (in this case you should use single quotes), Splunk treats Item and Subject as separate field names and tries to concatenate (the dot operator) their values. Since you have no fields called either Item or Subject in your data, the result of joining two null values is of course null as well. You should do isnull('Item.Subject') to get a correct result. spath is not needed, and since Splunk has already done automatic JSON extraction, it's a needless performance hit.
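A quick sketch of the fix (the has_subject field name is just for illustration):

| eval has_subject=if(isnull('Item.Subject'), "no", "yes")

Note the single quotes around the field name on the right-hand side of eval; double quotes there would make it a string literal instead.
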
Hi Isoutamo, both HWFs use the same IAM role and go to the same S3 bucket. We initially created both generic S3 inputs via config files when setting up the HWFs via Ansible, but we have also recreated the inputs manually via the GUI to see if that made any difference. Looking at it from the AWS side, we noticed that the HWFs go to different API endpoints when listing objects: the HWF with the old TA (the working one) uses ListObjects, but the HWF with the updated TA (which isn't working) uses ListObjectsV2. Thanks, Meaf

Normally, timestamp extraction (given it's properly configured) works pretty well and Splunk doesn't have to resort to the fallback action of assigning the current timestamp to the event. But the data ingestion must be properly configured - the sourcetype must have the correct settings for the given time format, and the sourcetype must be defined in the right place (on the correct component). In your case we have no idea what the ingestion process looks like, how many components are involved, where the settings are defined, what sourcetypes you are using and so on. Furthermore, the fact that
<777> 2025-01-03T06:12:19.236514-08:00 hello world event
which looks like an "almost normal" syslog message (the <777> is definitely not a correct facility/priority combination), is getting transformed into
Jan 28 14:27:25 127.0.0.1 2025-01-03T06:12:19.236514-08:00 hello world
suggests that there is some intermediate step (the 127.0.0.1 part is not part of the original message).
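As a sketch only - assuming the raw message really starts with the <PRI>-like prefix followed by the ISO 8601 timestamp, the sourcetype's timestamp settings in props.conf could look something like this (the sourcetype name is made up):

[my_syslog_like]
TIME_PREFIX = ^<\d+>\s
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40

Whether this belongs on the indexer, a HF, or elsewhere depends entirely on where that intermediate step sits in your ingestion path.
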
I'm planning to upgrade a multi-site IDX & SHC environment to version 9.3 and I have a question regarding the automated rolling upgrade feature: https://docs.splunk.com/Documentation/Splunk/9.3.0/DistSearch/AutomatedSHCrollingupgrade
With the "Automated rolling upgrade of a search head cluster" feature, there are options to execute this on:
- For a cluster upgrade, you can run these operations on any cluster member.
- For a deployer upgrade, you must run these operations on the deployer.
- For a non-clustered upgrade, which means upgrading search heads that are not part of a search head cluster, you must run these operations on each single search head.
Is it also possible to use this feature to upgrade the CM, LM, MC, DS & HWF as part of a non-clustered upgrade? There is an option to execute it on the Deployer and the License Manager, so I assume I can also use it on the other (stand-alone) management nodes. Any help would be much appreciated.

More words, please. What are you trying to achieve? If you just want to list all buckets for a given index, you can do something like this:

splunk search "| dbinspect index=_internal earliest=1 | table bucketId state"

As for the timeout issue - it is most probably network-related. A client tried to establish a connection and request some resource but didn't manage to do so in the allocated time window, so it returned a timeout error. This needs verifying on your end - whether the sites to be monitored are reachable, what connection parameters you use (or what the defaults are if the add-on doesn't let you specify them) and so on. As for "urgent assistance is required" - this is a volunteer-based community. People use their spare time to help others. For "urgent assistance" you typically go and pay; Splunk Answers is not a substitute for support or professional services.

Hi @Karthikeya, will you describe in more detail what exactly you are looking for? If you give some sample data, that will help with the query.

Application obsolescence is one thing. Performance is another (yes, there can be differences in performance between those apps, but as one is being obsoleted it might soon be difficult to find support for it anyway). Unless you have just one big source (in which case it might indeed be difficult to scale up your environment to meet the performance demands), you might want to consider splitting your eStreamer sources between different HFs so that you don't have a single component overwhelmed with inputs.

Try something along these lines (I have used _internal/splunkd but you can easily modify it to your requirements):

index=_internal sourcetype=splunkd earliest=@d-7d latest=now
| bin span=1d _time
| stats count by _time component
| eventstats sum(count) as total by component
| where _time = relative_time(now(), "@d")
| eval 7day_average=(total - count) / 7

It is the same answer as @richgalloway already gave - check with your stakeholders as to what they want. There is little point building a dashboard that nobody is going to use! Start small with just one or two panels, see if they find it useful, and ask them how it might be changed and what else they might want to see.

| stats dc(field) by sessionID

This is a very open question with many answers, but without a clearer understanding of what you want to get out of your dashboard, it is not easy to say. You could use any of the visualisations available in the dashboards; some would be more effective than others depending on the information you are trying to convey. Perhaps you should start small with a statistics table, present that to your stakeholders, and ask them what else they would like to see?
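For example, a minimal sketch along those lines, assuming JSESSIONID and severity are already extracted fields (the index name is a guess):

index=your_app_index severity IN (CRITICAL, ERROR)
| chart count over JSESSIONID by severity

Rendered as a (stacked) bar chart, this would show the critical/error breakdown per session.
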
Timeouts are a client-side concept - the "client" has given up waiting for a response. This may be due to a number of reasons. Often with web applications, the webserver is waiting for a resource, e.g. a thread, in order to execute the request, and when one is not available "in time", a "timeout" is reported. Check whether your timeouts are clustered around particular URLs or particular times of day, and investigate what is going on on your webservers for those URLs or times of day.
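As a rough sketch for spotting such clusters - assuming your events carry url and timed_out fields as in the output quoted in this thread (the index name is a placeholder):

index=web_monitoring timed_out="True"
| timechart span=1h count by url
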
How do I find the bucket status in the CLI? I am using the below query:

./splunk search "| dbinspect index=_internal span=1d"

We are currently monitoring application URLs using the "Website Monitoring" add-on. However, many URLs are returning null values for the response code, indicated as (response_code="" total_time="" request_time="" timed_out=True). This results in "timed_out=True" errors, making it impossible to monitor critical URLs and applications in the production environment. Urgent assistance is required to resolve this issue. Prompt support would be highly appreciated and invaluable.

Hi @Karthikeya,
my hint is to follow the Splunk Search Tutorial ( https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial ), so you'll be able to create your own searches.
Then, if you like the classic dashboard interface, you can use the Splunk Dashboard Examples app ( https://splunkbase.splunk.com/app/1603 ) even though it's archived; if instead you prefer the Dashboard Studio interface, there are many examples to use. But anyway, you have to start from the search!
Ciao. Giuseppe

I am pretty new to Splunk. I have a requirement to create a dashboard panel which relates our JSESSIONIDs and severity - e.g., for a specific JSESSIONID, how many critical or error logs are present. I tried using stats and chart but am not getting the desired result, maybe due to my limited Splunk knowledge. I need to present this in a pictorial way. Please suggest the Splunk query and what type of visualization will fit this requirement.

Hi @rahulkumar,
host is one of the mandatory metadata fields in Splunk and must have this name; you can also have some aliases (if you like), but that isn't a best practice.
_raw is the name of the full raw event and you must use it. It's the same for the timestamp: it must be called _time (probably it's another field to extract from the JSON!).
Then you can extract other fields (if relevant for you) from the JSON before the last transformation that removes all the fields but message. In other words, you have to extract all the fields you need and at least restore the message field as the raw event (putting the message field in the _raw field).
Ciao. Giuseppe
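For the "restore message as _raw" part, a minimal index-time sketch, assuming the JSON has a top-level message field (the stanza and sourcetype names are made up):

# props.conf
[my_json_sourcetype]
TRANSFORMS-restore_message = restore_message_as_raw

# transforms.conf
[restore_message_as_raw]
# capture the content of the message field and make it the new raw event
REGEX = "message"\s*:\s*"([^"]*)"
FORMAT = $1
DEST_KEY = _raw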