All Posts



Agreed, if it is UI issue, then go with Splunk support case.
Son of a...lol I'll give that a shot and get back with you. Thanks for the replies!
Again - close, but no banana. But seriously, you need to filter on input on those UFs, not on output. So you must add those settings to inputs.conf on the UFs. Your UFs are not outputting events to EventLog - they read from it, so the blacklist has to be applied on the input side.
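For reference, a minimal inputs.conf sketch on the UF (the exact app location is up to you - adjust to your deployment layout, and restart the UF afterwards):

```ini
# inputs.conf on the UF - filtering happens at input time, not output time
[WinEventLog://Security]
disabled = 0
current_only = 1
# drop events with EventCode 5447 before they are forwarded
blacklist = 5447
```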
As others already stated - it's a bit of a vague requirement. "Gather all data and present it in a readable format" at first glance reads to me as "print all raw events Splunk is receiving", which is kinda impossible for a human to read and a bit pointless too.

If you want some aggregate to gain insight into what _kinds_ of data Splunk is getting and _where from_, you'll have to be a bit creative since - as you already noticed - if you simply do an overall tstats split by source, sourcetype and host, you'll get a load of results, but they won't make much sense either. You need to do some "half-manual" filtering, like aggregating file sources by path or even overall by sourcetype. How much of it you have to do will vary depending on your actual data.

In some cases you can simply do some tweaking with SPL - maybe matching some sources to a regex, maybe just counting all sources or all hosts by sourcetype... In smaller cases you might just get away with exporting results to CSV and a bit of manual tweaking in Excel to get reasonable results.
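A sketch of that kind of "half-manual" aggregation - collapsing the per-file source noise down to the sourcetype level (the index=* scope is an assumption; narrow it for your environment):

```spl
| tstats count WHERE index=* BY index, sourcetype, host
| stats dc(host) AS hosts, sum(count) AS events BY index, sourcetype
| sort - events
```

From there, export to CSV and tidy up by hand if needed.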
Right on. I added the blacklist to my machine's UF outputs.conf last night as there wasn't an inputs.conf. I checked this morning and that event is still coming through. This is what the outputs.conf looks like.

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = "ip of splunk":9997

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 5447
Hi @Cyber_Shinigami,
in Splunk, the main question is: what do you want to display? Do you want a list of sourcetypes or a list of hosts?

I suppose - but it's only an idea of mine - that an executive is mainly interested in the kind of main data indexed, so I'd display some grouped information, like the number of different hosts:

| tstats dc(host) AS host_count count where index=* by sourcetype
| sort -count
| head 10

You could also eventually add a lookup that translates the sourcetypes into more comprehensible descriptions: e.g. cp_log -> "CheckPoint Logs" or fgt_logs -> "Fortinet Logs".
Ciao.
Giuseppe
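That translation step could look like this (sourcetype_descriptions.csv is a hypothetical lookup with columns sourcetype and description - you'd build and upload it yourself):

```spl
| tstats dc(host) AS host_count count where index=* by sourcetype
| lookup sourcetype_descriptions sourcetype OUTPUT description
| eval description = coalesce(description, sourcetype)
| sort -count
| head 10
```

The coalesce keeps the raw sourcetype visible for anything the lookup doesn't cover.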
Hi @Avantika,
where did you locate these conf files?
They must be located in the first full Splunk instance the data passes through - in other words, in the first Heavy Forwarder or, if not present, in the Indexers.
Ciao.
Giuseppe
I totally understand where you are coming from and what you are saying. Alas, I think at this point in time management is attempting to understand what Splunk is collecting so that we can better understand what Splunk might potentially be missing (such as when someone stands up a server and doesn't tell anyone).

I have broken the metrics down by time into a more readable format (like the last 30 minutes or 24 hours) to test the SPL queries that I've been attempting. That is why I have been focused on organizing the data by Host, Sourcetype, Source, and Index, so that I could capture everything while understanding the resource intensity associated with it.

Additionally, I created a Dashboard Studio dashboard that showcases each data point listed above in its own tab - it still shows everything, but isn't in one instance or table.
Whoops, apologies, posted on the wrong board.  If a mod could help me move it to a more appropriate board that'd be excellent.
What does currentDBsizeMB actually represent?

I'm seeing some discrepancies in the actual file system consumption between our indexers and cold storage volumes (which are NVMe-over-IP mounts on different servers).

Does currentDBsizeMB include just hot? Hot/warm? Or hot/warm/cold? Does it include replica copies, or just one, so that you have to multiply the value by your replication factor to get the "true" size of the index on disk?

I have been unable to find a definitive answer on this; thanks in advance to anyone who can help shed some light on this field.
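For anyone wanting to compare the reported value against actual file system usage, the field comes from the data/indexes REST endpoint and can be pulled per index like this (a sketch; field names assume a recent Splunk Enterprise release):

```spl
| rest /services/data/indexes splunk_server=*
| table splunk_server, title, currentDBSizeMB, homePath, coldPath
```

Listing homePath and coldPath alongside the size makes it easier to line each value up against a specific volume when checking for discrepancies.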
I question the requirement on a few levels.  First, "gather all data" is a huge task.  Presumably, your Splunk environment has ingested multiple terabytes of data over time.  Gathering it all is impractical. Second, "visually readable format".  It's not only somewhat redundant, but also very vague.  How should the data be presented?  A text dump of every event ever received by Splunk would comply with the requirement, but probably would not be well received by executives. Third, this sounds like a typical management directive where those asking don't know what they want. Push back and ask for more information.  What problem are they trying to solve?  Do executives really care about (or even understand) indexes and sourcetypes?  They probably don't and are more interested in high-level metrics like storage cost trends or number of incidents detected.
Hello Splunk Community,
I am very new to Splunk and was given the following task and could really use some help: to gather all data that Splunk is collecting and put it in a visually readable format for executives.

I have been trying many things to accomplish this, such as using Enterprise Security > Audit > Index Audit and Forwarder Audit, creating custom classic dashboards, and using Dashboard Studio to play around with the data. Nothing seems to give me what I need. I have also tried the following:

| tstats values(source) as sources, values(sourcetype) as sourcetype where index=* by host
| lookup dnslookup clienthost as host OUTPUT clientip as src_ip

This method is very resource intensive and provides me with the information I need, but the Source and Sourcetype values are incredibly long and make the table hard for executives to read. Is there another way to do this?
Try counting the number of indexes for each EventId.

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1

Also, the append command is inefficient and not necessary in this case. Try this:

index=index1 OR (index=index2 sourcetype="api")
| rename Number__c as EventId
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1
Hello Splunkers,
I was wondering if it's possible to combine adaptive and static thresholds in IT Service Intelligence for the same KPI.

As an example, let's consider the percentage of swap memory used by a host. If I apply static thresholds, I know there's an issue only when the last detected value exceeds a fixed number (we can call this "the old style monitoring"). On the other hand, if I use ITSI adaptive thresholding, the boundary will adapt itself using historical data. This solution would be great, but let's imagine that the swap memory used by the system slowly but continuously grows over days and weeks. At a certain point it will reach 100%, but the KPI state will say "normal" because that value is, in some way, aligned with previous ones.

Is there a way to use the adaptive thresholding behavior while keeping the "emergency" values fixed?
Thanks in advance. Happy Splunking!
After the upgrade of Splunk core to release 9.4.0, when I want to bind an LDAP group name to a role inside Splunk (I have about 200 roles), Splunk shows me only 30 roles. I tried to bypass this bug/issue by setting it via conf file and then restarting the Splunk service, but this is tedious. Have you encountered this issue? How did you resolve it?

NOTE: Environment
Search Head Cluster
Splunk Enterprise rel. 9.4.0
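For reference, the conf-file workaround mentioned above goes in authentication.conf under a roleMap stanza (the strategy and group names here are hypothetical - substitute your own):

```ini
# authentication.conf - map LDAP groups to Splunk roles
# stanza name is roleMap_<your LDAP strategy name>
[roleMap_MyLDAP]
admin = SplunkAdmins
user = SplunkUsers;SplunkReaders
```

Each line maps one Splunk role to one or more LDAP groups, separated by semicolons; a restart (or a rolling restart on a Search Head Cluster) is needed to pick up the change.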
Hi @navan1,
only one question: do you want to search in a defined field or in the whole raw event?

If in one field (user) that's the same in both the main search and the lookup, please try this:

index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | fields user ]
| table app action signinDateTime user shortname

If you want to perform a full text search of the lookup user values in the main search, you can try:

index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | rename user AS query | fields query ]
| table app action signinDateTime user shortname

Ciao.
Giuseppe
Hi @jkamdar,
Windows could be OK for a lab, but not for a production system!

First question: is it a stand-alone server or a distributed environment?

If a stand-alone server, it's simple and I can give you some tips: start from the same Splunk version, copy the apps from the old instance to the new one, and modify any monitor inputs to use the new paths.

If instead it's a distributed environment, you can copy the indexes.conf files into one app containing all the index definitions, and copy all the apps onto the Search Heads. For the cluster or distributed search configurations, it's easier to start as a new infrastructure, configuring all the connections.

These are a few pillars, but the easiest way is to start from the beginning, copying the index files one by one. The main issue is migrating the data.
Ciao.
Giuseppe
I have a search that searches 2 different indexes. We expect that there is 1 record from each index for a single id. The search is pretty simple:

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count by EventId
| search count < 2

What I would like to do now is evaluate that there is a single record from each index for each EventId, to ensure that the count of 2 isn't 2 records in a single index. There are times where, in index2, a single EventId has more than one record, which makes the count inaccurate because it's not evaluating whether there was a record for it in index1.
Hi,
We have installed TrackMe in our Splunk Cloud for log and host monitoring. I have set up alerts for a few sourcetypes, tracking if no logs report to Splunk for an hour.

Now, what I want to understand is: if an alert has been triggered and the issue has been taken care of, how do we acknowledge the alert? I am unfamiliar with the UI of TrackMe. My version is 2.1.7.

The one I have circled is the number of alerts which have triggered. Let's say the issue is fixed for one of the sourcetypes, but the number still shows 4. Could someone please explain?
Does the axis show today's date but no data fill? Or is it that the axis is cut off as well?