All Posts

Hi Team, yes, we tried with the Node.js agent and it is working fine. We are getting all the expected metrics for Nest.js.
@dkmcclory The Cisco Secure eStreamer Client Add-On for Splunk (App ID: 3662) has been marked for end-of-life as of July 15, 2024, with limited support available. Users are advised to transition to the Cisco Security Cloud App for Splunk (App ID: 7404), which integrates the eStreamer SDK to provide comprehensive event support, including IDS, Malware, Connection, and IDS Packet data.
@dkmcclory To improve ingestion performance independently of the app or add-on used, consider optimizing your hardware resources, such as CPU and RAM. Allocating additional CPU cores and memory can significantly enhance the handling of high event rates. Ensure a stable, high-speed network connection between the Firepower Management Center (FMC) and Splunk to avoid data bottlenecks. Use high-performance storage to handle the rapid write operations required for high event volumes. Additionally, filter out unnecessary events on the heavy forwarder by configuring props.conf and transforms.conf before sending data to the indexers, for example as sketched below.
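To make that last point concrete, here is a minimal sketch of routing unwanted events to nullQueue on the heavy forwarder; the sourcetype name and the regex are assumptions and should be replaced with whatever events you actually want to discard.

# props.conf on the heavy forwarder (sourcetype name is an assumption)
[cisco:estreamer:data]
TRANSFORMS-drop_noise = drop_connection_noise

# transforms.conf - anything matching REGEX is routed to nullQueue and never indexed
[drop_connection_noise]
REGEX = ConnectionEvent
DEST_KEY = queue
FORMAT = nullQueue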
Hi Ryan, no, I haven't received a solution for this yet. I tried the following but am receiving an error.

#set($splitMessage = $eventMessage.split("<br>"))
#set($Message = $splitMessage[6])

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
This is an automated message from AppDynamics.
Subject: Health Rule Violation - ${latestEvent.healthRule.name}
Impacted Component: $impacted
Message = $Message
That’s an excellent point. Actually, you must do it, or at least check whether it is needed, every time you add a new data source. It doesn't matter whether it's a new source system indexing into your current indexers or a totally new indexer or cluster added to your SH.
There are two sides to this coin.
1) Yes, if you add another cluster or a standalone indexer, your search will be distributed there as well (unless you explicitly limit your search - see the sketch below).
2) ES will only work seamlessly if the data in your new indexers matches the already existing configuration. So if you just connect indexers containing more of the same stuff you already have, you should be good to go. But if you've only been processing - for example - network data so far and add indexers containing endpoint events, you might need to adjust your ES configurations, datamodel accelerations and so on.
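As an aside on point 1, explicitly limiting a search to particular indexers can be done with the built-in splunk_server field; the index, sourcetype and indexer naming pattern below are assumptions:

index=network sourcetype=firewall splunk_server=idx-old-*
| stats count by action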
Hi, do you have the same or a different lexical form for your splunk.com account name and your local Splunk box account name? If those are formally the same, it's quite possible that your browser's password manager thinks those two different site passwords should be the same and is offering to update them both to one value. As @MuS said, there are two different accounts: one for splunk.com and Splunkbase, and a second one for your local Splunk instance. I strongly propose that you keep those in different lexical formats. Otherwise you must be really careful to avoid updating the wrong password.
I want to create a custom role to manage Splunk risky commands. I looked for configuration files related to risky commands and found that they involve the web.conf and commands.conf files, and that you can disable a risky command warning by setting is_risky=false in the [command] stanza of commands.conf. What I want is to give a role the ability to manage risky commands so that I can get different search results than other users. I wonder if it is possible to create such a role.
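For reference, the commands.conf side looks roughly like this (using collect purely as an illustration); note that, as far as I know, commands.conf settings apply instance-wide rather than per role, so they cannot by themselves give one role different behaviour from another:

# commands.conf (e.g. in an app's local directory)
[collect]
is_risky = false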
If you can connect locally with curl, everything is basically OK. It means that the issue is on the network side. Do you have any node on the same subnet (no network firewall between it and Splunk) where you could try curl against this host? Another test that needs to be done is to try curl on the Splunk host itself, but using the official URL, not localhost. And if there is an LB/VIP address in front of the Splunk nodes, then also use that, as well as the Splunk nodes' IPs. That way we can try to find where the blocking firewall is. We have several RHEL 9 CIS v1 hardened boxes and there are no issues with them.
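For example, the tests described above could look like this; the host names and the ports (8000 for Splunk Web, 8089 for the management port) are assumptions, so substitute whatever your deployment uses:

# on the Splunk host itself, using the official name instead of localhost
curl -vk https://splunk01.example.com:8000/
curl -vk https://splunk01.example.com:8089/services/server/info

# from another node on the same subnet (no firewall in between)
curl -vk https://splunk01.example.com:8000/

# through the LB/VIP, if one sits in front of the Splunk nodes
curl -vk https://splunk-vip.example.com:8000/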
Your question is a little confusing, as the table shows the values of sessionID by field, which is what you say you wanted, but the stats is giving the values of field by sessionID, i.e. the other way round. Are you looking for dc, i.e.

| stats dc(sessionID) as uniqueSessionCount by field

which would give you the count of different sessionIDs for each value of "field"?
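A runnable sketch using the sample data from the question (assuming a Splunk version where makeresults accepts format=csv, i.e. 9.0 or later):

| makeresults format=csv data="field,sessionID
value1,ABC123
value1,123ABC
value2,ABC123
value3,123ABC
value4,ABC123
value4,123ABC
value4,AABBCC
value4,12AB3C
value5,ABC123
value5,123ABC
value5,AABBCC
value5,12AB3C
value5,CBA321"
| stats dc(sessionID) as uniqueSessionCount by field

which returns uniqueSessionCount of 2, 1, 1, 4 and 5 for value1 through value5.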
I'm trying to count the unique values of a field by the common ID (session ID), but only once (one event). Each sessionID could have multiples of each unique field value. Initially I was getting the count of every event, which isn't what I want to count, and if I 'dedup' the sessionID then I only get one of the unique field values back. Is it possible to count one event per session ID for each unique field value?

"stats values("field") by sessionID" gets me close, but in the table it lists the sessionIDs, whereas I'm hoping to get the number (count) of unique sessionIDs:

Field    sessionID
value1   ABC123 123ABC
value2   ABC123
value3   123ABC
value4   ABC123 123ABC AABBCC 12AB3C
value5   ABC123 123ABC AABBCC 12AB3C CBA321

Hopefully that makes sense. Thanks
Hi there, if you are downloading add-ons from within your local Splunk Enterprise UI, the login that is required is the login from splunk.com, and if that still fails, try to download it here: https://splunkbase.splunk.com/ Hope this helps ... cheers, MuS
Note that if you are just searching 8 days, then it's as easy and probably more efficient to use stats rather than streamstats. Note these are simple examples that you can paste into the search window to run:

| makeresults count=8
| streamstats c
| eval _time=now() - ((c - 1) * 86400)
| fields - c
| eval ApplName=split("ABCDEFGHIJKLMNO","")
| mvexpand ApplName
| eval ApplName="Application ".ApplName
| eval count=random() % 20
| table _time ApplName count
``` Above creates data ```
``` This is just a simple technique to anchor today's value and exclude it from the average ```
| streamstats c by ApplName
| sort _time
``` Then this will take the average (assuming 8 days of data) ```
| stats latest(_time) as _time avg(eval(if(c=1, null(), count))) as Avg latest(count) as count by ApplName
| eval Variance=count-Avg
| sort ApplName - _time
| where _time >= relative_time(now(), "@d")
Here's an example which sets up two weeks of simulated data and then does the calcs you want:

| makeresults count=14
| streamstats c
| eval _time=now() - ((c - 1) * 86400)
| fields - c
| eval ApplName=split("ABCDEFGHIJKLMNO","")
| mvexpand ApplName
| eval ApplName="Application ".ApplName
| eval count=random() % 20
| table _time ApplName count
``` The above sets up 2 weeks of data for 15 applications ```
``` Now sort in ascending time and calculate the rolling 7 day average - at the end (most recent) it will show the average based on the PRIOR 7 days (current=f) ```
| sort _time
| streamstats window=7 current=f global=f avg(count) as Avg by ApplName
``` Then this just calculates the variance ```
| eval Variance=count-Avg
``` And finally put in order and retain only today's numbers ```
| sort ApplName - _time
| where _time >= relative_time(now(), "@d")

This should give you some pointers, but if you want to share what you've tried so far, we can help you get there.
We have data in Splunk that is basically DATE/APPLNAME/COUNT; there are about 15 applications, and we would like to create a table that shows, by application, the current day's count, the 7 day average, and the variance of today against the average. I've tried a number of things with different searches like appendcols, but am not getting the results. I can produce the count or the average, but can't seem to put them together correctly.
Hi, I am trying to download Splunk add-ons onto my standalone system. However, it keeps showing me an incorrect ID and password error. I tried changing the password but the issue is the same. I am able to log in with the same credentials but unable to download anything.
To go slightly tangential to your post, you refer to base searches. Note that a base search that does NOT do aggregation is a bad use of a base search, so if you are just doing index=xxx | fields * in your base search and not doing a transforming command, that is not a good example to be showing in an example dashboard. It will often perform worse than one using a transforming command, but also has significant limitations in that it can only hold a limited set of results. See this https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/Savedsearches#Post-process_searches_2    
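As a minimal sketch of the recommended pattern, the base search below aggregates first and the panel post-processes the aggregated rows; the index, fields and filter value are assumptions:

<dashboard version="1.1">
  <label>Base search example</label>
  <search id="base">
    <query>index=web | stats count by host sourcetype</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| where sourcetype="access_combined" | sort - count</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>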
You can download archived apps from the old splunkbase site https://classic.splunkbase.splunk.com/app/1603/  
There are generally 2 ways to do this, one can be done in search alone and the other can be done in the dashboard. I tend to use the dashboard approach when in a dashboard, which is to use addinfo and to calculate the ranges needed for the outer search. The technique is to use a hidden search, either in a table where you have <row depends="$hidden$"> as the row header, or as a base search in the core body of the XML. (NB: In this example I have not hidden the search so you can see what's generated.) However, see this example, which calculates 10 periods going back over the last 10 days with the correct matching time period.

<form version="1.1" theme="light">
  <label>Times</label>
  <fieldset submitButton="false">
    <input type="time" token="thetime" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-5m@m</earliest>
        <latest>@m</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <done>
            <set token="pd_min_current">$result.pd_min_current$</set>
            <set token="pd_max_current">$result.pd_max_current$</set>
            <set token="pd_min_1">$result.pd_min_1$</set>
            <set token="pd_max_1">$result.pd_max_1$</set>
            <set token="pd_min_2">$result.pd_min_2$</set>
            <set token="pd_max_2">$result.pd_max_2$</set>
            <set token="pd_min_3">$result.pd_min_3$</set>
            <set token="pd_max_3">$result.pd_max_3$</set>
            <set token="pd_min_4">$result.pd_min_4$</set>
            <set token="pd_max_4">$result.pd_max_4$</set>
            <set token="pd_min_5">$result.pd_min_5$</set>
            <set token="pd_max_5">$result.pd_max_5$</set>
            <set token="pd_min_6">$result.pd_min_6$</set>
            <set token="pd_max_6">$result.pd_max_6$</set>
            <set token="pd_min_7">$result.pd_min_7$</set>
            <set token="pd_max_7">$result.pd_max_7$</set>
            <set token="pd_min_8">$result.pd_min_8$</set>
            <set token="pd_max_8">$result.pd_max_8$</set>
            <set token="pd_min_9">$result.pd_min_9$</set>
            <set token="pd_max_9">$result.pd_max_9$</set>
            <set token="pd_min_10">$result.pd_min_10$</set>
            <set token="pd_max_10">$result.pd_max_10$</set>
          </done>
          <query>| makeresults
| addinfo
| eval pd_min_current=info_min_time, pd_max_current=info_max_time
| foreach 1 2 3 4 5 6 7 8 9 10
    [ eval pd_min_&lt;&lt;FIELD&gt;&gt;=relative_time(info_min_time, "-"."&lt;&lt;FIELD&gt;&gt;"."d"),
           pd_max_&lt;&lt;FIELD&gt;&gt;=relative_time(info_max_time, "-"."&lt;&lt;FIELD&gt;&gt;"."d") ]
| fields - info_*</query>
          <earliest>$thetime.earliest$</earliest>
          <latest>$thetime.latest$</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit (earliest &gt;= $pd_min_current$ AND latest &lt; $pd_max_current$) OR
  (earliest &gt;= $pd_min_1$ AND latest &lt; $pd_max_1$) OR
  (earliest &gt;= $pd_min_2$ AND latest &lt; $pd_max_2$) OR
  (earliest &gt;= $pd_min_3$ AND latest &lt; $pd_max_3$) OR
  (earliest &gt;= $pd_min_4$ AND latest &lt; $pd_max_4$) OR
  (earliest &gt;= $pd_min_5$ AND latest &lt; $pd_max_5$) OR
  (earliest &gt;= $pd_min_6$ AND latest &lt; $pd_max_6$) OR
  (earliest &gt;= $pd_min_7$ AND latest &lt; $pd_max_7$) OR
  (earliest &gt;= $pd_min_8$ AND latest &lt; $pd_max_8$) OR
  (earliest &gt;= $pd_min_9$ AND latest &lt; $pd_max_9$) OR
  (earliest &gt;= $pd_min_10$ AND latest &lt; $pd_max_10$)
| bin _time span=5m aligntime=@m
| chart count by _time user</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

See how the <done> part of the hidden search will then set the tokens needed by your actual search. The other technique is to do the same as the hidden search, but in a subsearch, so the subsearch will return earliest and latest for each of the periods you want to restrict to. Hope this helps.
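As a rough sketch of that second (subsearch) technique, something like this could generate the equivalent day-over-day windows inline; the arithmetic mirrors the hidden search above but is an assumption and should be validated against your data:

index=_audit
    [ | makeresults count=10
      | streamstats count as offset
      | addinfo
      | eval earliest=relative_time(info_min_time, "-".offset."d"),
             latest=relative_time(info_max_time, "-".offset."d")
      | table earliest latest ]
| bin _time span=5m aligntime=@m
| chart count by _time user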
Yes, I did these commands many times and it connects. It was puzzling the Splunk guys and I didn't find anything in the logs, and I sent the Splunk diag to the Splunk engineers and they found nothing. I think we need a Red Hat expert who can look at the CIS hardening controls and state which one it is. Splunk leadership should definitely step in and find a solution if this is a bug between CIS Red Hat 9 "v2" level 1 hardening and their Splunk 9.3 and 9.4 product.