All Posts


Timeouts are a client-side concept: the "client" has given up waiting for a response. This can happen for a number of reasons. With web applications, the webserver is often waiting for a resource (e.g. a thread) in order to execute the request, and when one does not become available in time, a timeout is reported. Check whether your timeouts are clustered around particular URLs or particular times of day, and investigate what is happening on your webservers for those URLs or times.
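One way to look for that clustering is a search like the following. This is only a sketch: the index name, the url field, and the timed_out field are assumptions about your data and may differ in your environment.

```spl
index=web timed_out=True
| bin _time span=1h
| stats count AS timeouts BY _time, url
| sort - timeouts
```

Sorting by count surfaces the URL/hour combinations with the most timeouts, which is usually where to start digging.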
How do I find the bucket status in the CLI? I am using the below query: ./splunk search "| dbinspect index=_internal span=1d"
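For reference, dbinspect returns a state field for each bucket (hot, warm, cold, etc.), so a sketch like the following summarizes buckets by state; adjust the index to taste:

```spl
| dbinspect index=_internal
| stats count BY state, index
```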
We are currently monitoring application URLs using the "Website Monitoring" add-on. However, many URLs are returning null values for the response code, indicated as (response_code="" total_time="" request_time="" timed_out=True). This results in "timed_out=True" errors, making it impossible to monitor critical URLs and applications in the production environment. Urgent assistance is required to resolve this issue. Prompt support would be highly appreciated.
Hi @Karthikeya , my hint is to follow the Splunk Search Tutorial ( https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial ), so you'll be able to create your own searches. Then, if you prefer the classic dashboard interface, you can use the Splunk Dashboard Examples app ( https://splunkbase.splunk.com/app/1603 ), even though it's archived. If instead you prefer the Dashboard Studio interface, there are many examples to use. Either way, you have to start from the search! Ciao. Giuseppe
I am pretty new to Splunk. I have a requirement to create a dashboard panel which relates our JSESSIONIDs and severity, e.g. for a specific JSESSIONID, how many critical or error logs are present. I tried using stats and chart but am not getting the desired result, possibly due to my limited Splunk experience. I need to present this pictorially. Please suggest a Splunk query and what type of visualization will fit this requirement.
Hi @rahulkumar , host is one of the mandatory metadata fields in Splunk and must have this name; you can also have aliases (if you like), but it isn't a best practice. _raw is the name of the full raw event and you must use it. The same goes for the timestamp: it must be called _time (probably it's another field to extract from the JSON!). Then you can extract other fields (if relevant for you) from the JSON before the last transformation that removes all fields but message. In other words, you have to extract all the fields you need and at least restore the message field as the raw event (putting the message field into the _raw field). Ciao. Giuseppe
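The "put the message field into _raw" step can be done at index time with INGEST_EVAL. This is a minimal sketch assuming the events are JSON with a top-level message key; the sourcetype and stanza names are hypothetical:

```ini
# props.conf (sourcetype name is an assumption)
[my_json_sourcetype]
TRANSFORMS-restore = restore_message_as_raw

# transforms.conf
[restore_message_as_raw]
INGEST_EVAL = _raw:=json_extract(_raw, "message")
```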
Hi @onthakur , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi Team, Yes, We tried with the node.js agent and it is working fine. We are getting all expected metrics for Nest.js
@dkmcclory The Cisco Secure eStreamer Client Add-On for Splunk (App ID: 3662) has been marked for end-of-life as of July 15, 2024, with limited support available. Users are advised to transition to the Cisco Security Cloud App for Splunk (App ID: 7404), which integrates the eStreamer SDK to provide comprehensive event support, including IDS, Malware, Connection, and IDS Packet data.
@dkmcclory To improve ingestion performance independently of the app or add-on used, consider optimizing your hardware resources, such as CPU and RAM. Allocating additional CPU cores and memory can significantly enhance the handling of high event rates. Ensure a stable, high-speed network connection between the Firepower Management Center (FMC) and Splunk to avoid data bottlenecks. Use high-performance storage to handle the rapid write operations required for high event volumes. Additionally, filter out unnecessary events on the heavy forwarder by configuring props.conf and transforms.conf before sending data to the indexers.
Hi Ryan, No, I haven't received a solution for this yet. I tried the below but I'm receiving an error.

#set($splitMessage = $eventMessage.split("<br>"))
#set($Message = $splitMessage[6])

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
This is an automated message from AppDynamics.
Subject : Health Rule Violation - ${latestEvent.healthRule.name}
Impacted Component: $impacted
Message = $Message
That’s an excellent point. Actually, you must do it, or at least check whether it's needed, every time you add a new data source. It doesn't matter whether it's a new source system indexing to your current indexers or a totally new indexer or cluster attached to your SH.
There are two sides to this coin. 1) Yes, if you add another cluster or a standalone indexer, your search will be distributed there as well (unless you explicitly limit your search). 2) ES will only work seamlessly if the data in your new indexers matches the already existing configuration. So if you just connect indexers containing more of the same data you already have, you should be good to go. But if you've only been processing, for example, network data so far and you add indexers containing endpoint events, you might need to adjust your ES configuration, datamodel accelerations and so on.
Hi, do your splunk.com account name and your local Splunk box account name have the same or different lexical form? If they are formally the same, it's quite possible that your browser's password manager thinks the passwords for those two different sites should be the same and is offering to update them to match. As @MuS said, there are two different accounts: one for splunk.com (and Splunkbase), and a second one for your local Splunk instance. I strongly suggest that you keep those in different lexical formats. Otherwise you must be really careful to avoid updating the wrong password.
I want to create a custom role to manage Splunk risky commands. I looked for the configuration files related to risky commands and found that they involve web.conf and commands.conf, and that you can disable a risky-command warning by setting is_risky=false under a [command] stanza in commands.conf. What I want is to give a role the ability to manage risky commands so that I can get different search results than other users. I wonder if it is possible to create such a role.
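For reference, the setting described above looks like this in commands.conf. This is a sketch only: the command name is an example, and note that is_risky is a per-command setting, not a per-role one.

```ini
# commands.conf (the stanza name is an example command)
[delete]
is_risky = false
```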
If you can connect locally with curl, everything is basically OK. It means the issue is on the network side. Do you have any node on the same subnet (no network firewall between it and Splunk) where you could try curl to this host? Another test worth doing is to try curl on the Splunk host itself, but using the official URL rather than localhost. And if there is an LB/VIP address in front of the Splunk nodes, then try that too, as well as the Splunk nodes' IPs. That way we can try to find where the blocking firewall is. We have several RHEL 9 CIS v1 hardened boxes and there are no issues with them.
Your question is a little confusing, as the table shows the values of sessionID by field, which is what you say you wanted, but the stats is giving the values of field by sessionID, i.e. the other way round. Are you looking for dc, i.e. | stats dc(sessionID) as uniqueSessionCount by field which would give you the count of distinct sessionIDs for each value of "field"?
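To see the difference on sample data, here is a self-contained sketch using makeresults (the Field/sessionID pairs are made up for illustration):

```spl
| makeresults
| eval raw="value1,ABC123 value1,123ABC value2,ABC123 value3,123ABC"
| makemv raw
| mvexpand raw
| eval Field=mvindex(split(raw,","),0), sessionID=mvindex(split(raw,","),1)
| stats dc(sessionID) AS uniqueSessionCount BY Field
```

Here dc() counts each distinct sessionID once per Field value, regardless of how many events it appears in.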
I'm trying to count the unique values of a field by the common ID (session ID), but only once (one event). Each sessionID could have multiples of each unique field value. Initially I was getting the count of every event, which isn't what I want to count, and if I dedup the sessionID then I only get one of the unique field values back. Is it possible to count one event per session ID for each unique field value? "stats values(field) by sessionID" gets me close, but in the table it lists the sessionIDs, whereas I'm hoping to get the number (count) of unique sessionIDs:

Field     sessionIDs
value1    ABC123 123ABC
value2    ABC123
value3    123ABC
value4    ABC123 123ABC AABBCC 12AB3C
value5    ABC123 123ABC AABBCC 12AB3C CBA321

Hopefully that makes sense. Thanks
Hi there, If you are downloading add-ons in your local Splunk Enterprise UI, the login that is required is the login from splunk.com, and if that still fails, try to download them here: https://splunkbase.splunk.com/ Hope this helps ... cheers, MuS
Note that if you are just searching 8 days, then it's as easy, and probably more efficient, to use stats rather than streamstats. Note these are simple examples that you can paste into the search window to run:

| makeresults count=8
| streamstats c
| eval _time=now() - ((c - 1) * 86400)
| fields - c
| eval ApplName=split("ABCDEFGHIJKLMNO","")
| mvexpand ApplName
| eval ApplName="Application ".ApplName
| eval count=random() % 20
| table _time ApplName count
``` Above creates data ```
``` This is just a simple technique to anchor today and exclude it from the average ```
| streamstats c by ApplName
| sort _time
``` Then this will take the average (assuming 8 days of data) ```
| stats latest(_time) as _time avg(eval(if(c=1, null(), count))) as Avg latest(count) as count by ApplName
| eval Variance=count-Avg
| sort ApplName - _time
| where _time >= relative_time(now(), "@d")