Two separate stats commands are unlikely to work because they're transforming commands.  That means the second stats won't have the same fields to work with as the first one.  One alternative is to use eventstats before stats, but it's unnecessary in this case because a single stats can do it all.
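For illustration, a minimal sketch of the single-stats idea, using hypothetical web-access field names - one stats call can compute several aggregations in one pass:

index=web sourcetype=access_combined
| stats count AS requests, dc(clientip) AS unique_clients, avg(bytes) AS avg_bytes BY host

Each aggregation here sees the original event fields, which is exactly what a second stats after the first one would no longer have.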
I would like the font size in the table I have made to be much bigger. Currently the largest size you can select in the font size dropdown under Colour and Style is Large. How can I make the numbers in my table bigger?
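If this is a Simple XML dashboard, one well-known workaround is to give the table an id and inject CSS from a hidden HTML panel. A sketch, where the bigFontTable id, the 24px size, and the sample search are all hypothetical:

<row>
  <panel depends="$alwaysHideCss$">
    <html>
      <style>
        /* the id must match the id on the table element below */
        #bigFontTable table td, #bigFontTable table th { font-size: 24px !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table id="bigFontTable">
      <search>
        <query>index=_internal | stats count by sourcetype</query>
      </search>
    </table>
  </panel>
</row>

As far as I know, Dashboard Studio has no supported CSS hook, so if this is a Studio dashboard the built-in sizes are the limit.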
Hi, we currently have a centralized WEF collection server that collects all Windows logs across the environment. This includes forwarding the Sysmon, Application, System, etc. channels to the collector. Everything ends up in ForwardedEvents on the WEF collection server. I've installed a UF on this host. I have the Windows TA deployed with the following input stanza:

#[WinEventLog://ForwardedEvents]
#disabled = 0
#index = wef
#start_from = oldest
#current_only = 0
#batch_size = 50
#checkpointInterval = 15
#renderXml=true
#host=WinEventLogForwardHost

I have two problems currently.

1. The Splunk universal forwarder doesn't appear to be keeping up with the number of Windows event logs coming to the WEF collector (~1000 hosts). Another (different) SIEM collector for WEF keeps up fine on the same host and collects all logs, so I'm able to compare what that collector is collecting vs. the Splunk UF. I've tried adjusting the batch_size and checkpointInterval as above.

2. I want to split certain Windows channels in the ForwardedEvents channel out to different indexes. I have tried deploying the Microsoft Sysmon TA and adding a new input with the following configuration:

#[WinEventLog://ForwardedEvents]
#disabled = true
#index = wef-sysmon
#start_from = oldest
#current_only = 0
#batch_size = 50
#checkpointInterval = 15
#renderXml=true
#host=WinEventLogForwardHost
#whitelist = $XmlRegex='Microsoft-Windows-Sysmon'

I then add blacklist = $XmlRegex='Microsoft-Windows-Sysmon' to the Windows TA. Then everything seems to stop: I stop receiving any events on my indexer. I've also tried adding multiple inputs with differing indexes and whitelists/blacklists in the Windows TA, to no avail. Would someone be able to point me in the right direction?
I work in the Healthcare industry and our customer base can have product versions that range from 6 to 18. For this dashboard, sites with versions less than 15 have to use one data source; for sites with versions 15 and over, I have a different set of data sources. So I have one query for versions below 15 and another query for version 15 and above. I have built a dropdown that lists the Site Name as choices. There is also a time picker to choose date ranges. In order to choose the correct query to run, I need to somehow pass the product version so the dashboard knows which one to run and display. How do I create the product version as a token to pass down to decide which query to use? Here is the start of my dashboard code; below it are just the two queries I will be choosing from.

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="propertyId" searchWhenChanged="false">
    <label>Site</label>
    <fieldForLabel>FullHospitalName</fieldForLabel>
    <fieldForValue>propertyId</fieldForValue>
    <search>
      <query>| inputlookup HealthcareMasterList.csv
| search ITV=1 AND ITV_INSTALLED&gt;1
| table propertyId FullHospitalName MarinaVersion
| join type=left propertyId
    [ search sourcetype=sysconfighost-v*
        [| inputlookup HealthcareMasterList.csv
         | search ITV=1 AND ITV_INSTALLED&gt;1
         | fields propertyId
         | format]
      | dedup propertyId hostId sortby -dateTime
      | stats max(coreVersion) as coreVersion by propertyId]
| eval version=if(isnull(coreVersion),MarinaVersion,coreVersion)
| eval version=substr(version,1,2)
| fields - MarinaVersion coreVersion
| sort FullHospitalName</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="time" token="field1" searchWhenChanged="false">
    <label>Date Picker</label>
    <default>
      <earliest>-1mon@mon</earliest>
      <latest>@mon</latest>
    </default>
  </input>
</fieldset>

With the query above I end up with three fields: propertyId, FullHospitalName, version. The FullHospitalName is what is displayed in the dropdown. The propertyId is what needs to be passed to the query itself to know what data to collect. How do I use the version field to determine which of the two queries to use?
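One common Simple XML pattern for this (a sketch only; the combined idver field, the token names, and the panel layout are assumptions, not tested against this data): encode the version into the dropdown's value, then split it back apart in a <change> handler and set a token that reveals the panel holding the matching query.

<input type="dropdown" token="siteChoice" searchWhenChanged="false">
  <label>Site</label>
  <fieldForLabel>FullHospitalName</fieldForLabel>
  <!-- hypothetical combined field, built at the end of the populating
       search with: | eval idver=propertyId."|".version -->
  <fieldForValue>idver</fieldForValue>
  <search>
    <query>| inputlookup HealthcareMasterList.csv ... | eval idver=propertyId."|".version</query>
  </search>
  <change>
    <condition match="tonumber(mvindex(split($value$,&quot;|&quot;),1)) &gt;= 15">
      <eval token="sitePropertyId">mvindex(split($value$,"|"),0)</eval>
      <set token="showNew">true</set>
      <unset token="showOld"></unset>
    </condition>
    <condition>
      <eval token="sitePropertyId">mvindex(split($value$,"|"),0)</eval>
      <set token="showOld">true</set>
      <unset token="showNew"></unset>
    </condition>
  </change>
</input>

<!-- each panel uses $sitePropertyId$ in its query; only one is visible at a time -->
<panel depends="$showOld$"> ... query for versions below 15 ... </panel>
<panel depends="$showNew$"> ... query for version 15 and above ... </panel>

The conditions are evaluated in order, so the bare second <condition> acts as the below-15 fallback.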
How can I make it show only the events where the Call.CallForwardInfo.OriginalCalledAddr field is null? Right now I have this result. Can you help me?
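A minimal sketch of one way to keep only the events where that field is missing (the base search is a placeholder; the single quotes around the field name are required because it contains dots):

... your base search ...
| where isnull('Call.CallForwardInfo.OriginalCalledAddr')

If the field instead contains the literal string "null", filter with | where 'Call.CallForwardInfo.OriginalCalledAddr'=="null" instead.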
Oh, that could explain it. I'll try to erase all events, clean the instance's data partition entirely, and restart clean, to see if the behavior is the same. Thanks for your help!
There is another possible explanation: someone was trigger-happy with the delete command. Deleted events are physically still in the index files, so eventcount sees them, but they are marked as not searchable, so tstats (and other search commands) don't use them.
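A quick way to test that theory (a sketch using the index pattern from this thread; run the tstats search over All Time):

| eventcount summarize=false index=XXX*

| tstats count where index=XXX*

If eventcount stays far above tstats even over All Time, deleted or otherwise unsearchable events are a likely culprit.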
Does anyone have an AWS EC2 instance dashboard sample? I am also looking for an EC2 instance OS/EBS/networking error code list to build the dashboard and query. Thanks, Muhammad
That worked great! I was trying to use two different 'stats' and could not get both of the values.   Thanks for your help!!
"I have another suspicion - you have an indexer cluster, right?" I forgot to mention it! I'm currently running a standalone instance, not connected to anything else. I checked just in case, but the monitoring console of the instance does see the 160 million events on the local instance, without replication. I also checked the inputs, and they are consistent with the returned number. What's more confusing is that the events seem to be "seen" by some commands, but not others. For example, I tried to directly search "index=XXX host=YYY sourcetype=ZZZ" (so every field used should be indexed and retrievable even without search-time extractions, and should not conflict with anything), and that search returns 2300 events over multiple hosts. If I pipe a "| stats count by host" behind it, the search returns 0 and doesn't see any events. I don't know why, but there seems to be a part of my events I cannot aggregate against. That would explain the inconsistency, but as for the root cause, I'm at a loss.
Actually it's not that much of an outlier. Assuming a 2-second gap between the first and second event, you have a 2/3 chance of splitting them into two separate bins: with span=3s, the two events land in the same bin only if the first one falls within the first second of its 3-second bin.
Tokens ($something$) cannot be used in SPL except in the map command.  They're not necessary, however.  Just use a field.

index=windows_logs
| eval userid="johnsmith"
| where in(userid, member_dn, member_id, Member_Security_ID, member_user_name)

Notice I changed the where command, since it does not support the IN operator.
Now you can tag HEC events for any HEC endpoint (including S2S) without paying for third-party software. https://community.splunk.com/t5/Getting-Data-In/Splunk-HTTP-Event-Collector-support-for-custom-metadata-tags/m-p/703131/highlight/true#M116292
Try this:

| stats count as Count, first(Field-B) as Example by Field-A
There are many non-native speakers (including myself) here, so don't worry. As long as you're making an effort to be at least somewhat understandable, it's great!

Every event has several fields or "metafields" (like index - it's technically not a field indexed with an event, it's a "selector", but it's treated like a field when you're processing results). And each event has the holy trinity of source, sourcetype, and host.

I have another suspicion - you have an indexer cluster, right? Quoting https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eventcount:

"Running in clustered environments
Do not use the eventcount command to count events for comparison in indexer clustered environments. When a search runs, the eventcount command checks all buckets, including replicated and primary buckets, across all indexers in a cluster. As a result, the search may return inaccurate event counts."
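If it is a cluster, one way to see whether replicated buckets inflate the number is eventcount's summarize=false option, which breaks the count down per index and per server (a sketch with the index pattern from this thread):

| eventcount summarize=false index=XXX*

Duplicated counts across peers then show up as separate rows per server.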
Haha, I was answering based on the provided information, not trying to work around every single possible outlier.
That was my initial reaction, but as usual, binning has issues when your events cross the bin boundary. For example, one event at :11 and another at :14 will be binned into separate buckets and won't match.
Hi there, like in most cases a simple 'stats' will do the trick. Try something like this:

index=printserver OR index=printlogs
| bin _time span=3s
| stats values(*) AS * by _time username
| table _time prnt_name username location directory file

Hope this helps ... cheers, MuS
First of all, English isn't my native language, so I apologize in advance for any errors in this support topic. I've run into a problem I'm a bit lost with: I'm indexing a lot of different data with different sourcetypes (mostly CSV and JSON data, but with a bit of unstructured data here and there), and the eventcount and tstats commands return very different event counts. I know the eventcount command doesn't care about the time window, so I tried extending the time window into the future up to the maximum supported by Splunk, but to no avail. To put numbers on it: on my instance the command "| eventcount index=XXX*" returns about 160 million events across my indexes. When I run "| tstats count where index=XXX* by sourcetype", the command only finds about 59 million events. Even extending the time window with "latest=+4824d" to reach the maximum supported by the software doesn't yield more events. I thought about frozen data, so I increased the time window before freezing events just for debugging, deleted all my data, and reindexed it all, but to no avail. Is it possible for an event to be indexed without a sourcetype? Or is there technological wizardry I'm not aware of?
Join on _time doesn't make sense if the time is off in one of the data sets. You noticed it yourself. This transaction doesn't make sense either, since you don't have a field called src which could contain one of those strings. If this is indeed all the data you have, it's a very tricky problem in general. Because what if the same user requests two print jobs within a second? How can you tell which one went to which printer if the only common field is the username? If you can make some assumptions about the data, the problem can be solved one way or another. You can use transaction on the username field and indeed maxspan=3s or something like that (maybe add a startswith="index=printserver" endswith="index=printlogs"). But transaction is a relatively resource-intensive command and is best avoided if possible. So if you can make some other assumptions, maybe the solution could be better.
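For illustration, a sketch of the transaction approach described above; the startswith/endswith filters here use eval expressions against the index field, which is an assumption about how the two data sets are told apart:

index=printserver OR index=printlogs
| transaction username maxspan=3s startswith=eval(index=="printserver") endswith=eval(index=="printlogs")
| table _time username duration eventcount

The eval form is needed because a plain startswith string is matched against the raw event text, and the index name does not appear there.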