All Posts

The appendcols command is rarely the right answer to an SPL problem.  You probably want append.

index="index" <search> earliest=-4h@h latest=@h
| stats latest(FieldA) as DataNew earliest(FieldA) as DataOld by Field1, Field2, Field3
| append
    [ search index="index" <search> earliest=-3h@h latest=-1h@h
    | stats latest(FieldA) as DataMidOld earliest(FieldA) as DataMidNew by Field1, Field2, Field3 ]
``` Re-group the results ```
| stats values(*) as * by Field1, Field2, Field3
| table DataNew, DataMidNew, DataMidOld, DataOld, Field1, Field2, Field3

The biggest problem with appendcols is that it requires the subsearch results to be in exactly the same sequence as those from the main search - otherwise, gibberish can result.
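A minimal, self-contained sketch of that positional pairing (made-up values, no index needed - paste it into any search bar):

``` appendcols glues the subsearch rows onto the main rows purely by position ```
| makeresults count=3
| streamstats count AS row
| eval Field1="host".row
| appendcols
    [| makeresults count=3
    | streamstats count AS row
    ``` the subsearch deliberately produces its rows in reverse order ```
    | eval FieldA="value".(4-row)]
| table Field1 FieldA

host1 ends up next to value3 even though nothing relates them - which is exactly what happens when the two real searches emit their groups in different orders.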
Hi Splunkers,  This is my first post as I am new to using Splunk. My issue arises when I am trying to pull specific values from a time range within one search. To do this I am using appendcols to add another search, designating new values for earliest and latest, then using the "stats latest(field) as 'name' by field, field" command to pull these values out. Here is an example query:

index="index" <search> earliest=-4h@h latest=@h
| stats latest(FieldA) as DataNew earliest(FieldA) as DataOld by Field1, Field2, Field 3
| appendcols
    [ search index="index" <search> earliest=-3h@h latest=-1@h
    | stats latest(FieldA) as DataMidOld earliest(FieldA) as DataMidNew by Field1, Field2, Field3 ]
| table DataNew, DataMidNew, DataMidOld, DataOld, Field1, Field2, Field3

In my mind, I see no error with this search, but the values for DataMidOld and DataMidNew do not align with the actual data, and are seemingly random. Any help is appreciated!
On a UF you shouldn't have a splunktcp input in inputs.conf. The only exception is when you are using the UF as an intermediate forwarder, but that is a different story.
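As a rough sketch (built from the settings quoted in this thread - adjust the host name and the indexer address), the split on a UF usually looks like this, with no splunktcp stanza at all:

# $SPLUNK_HOME/etc/system/local/inputs.conf - what to collect
[default]
host = computername

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 5447

# $SPLUNK_HOME/etc/system/local/outputs.conf - where to send it
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <indexer-ip>:9997

The [splunktcp://9997] stanza only belongs on instances that receive data from other forwarders, i.e. indexers or intermediate forwarders.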
One more question @ITWhisperer , how can we ignore the groups of letters that come in alphanumeric words delimited by hyphens? Example: CSR-345sc453-a2da-4850-aacb-7f35d5127b21 - Sending error response back in 2136 msecs. Expected output - CSR Sending error response back in msecs  OR  Sending error response back in msecs. The regex you shared is also including "aacb", but we want to ignore it. The requirement is to extract the statement without any correlation/context id so as to uniquely identify the error statement.
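This is not the regex referred to above, just one possible sketch against the sample line. It assumes the id always follows a letters-only prefix plus a hyphen, and that stray numbers such as the millisecond count should also be dropped:

``` collapse <letters>-<id-containing-digits> down to just the letters prefix ```
| rex mode=sed field=_raw "s/\b([A-Za-z]+)-\S*\d\S*/\1/g"
``` drop the orphaned " - " separator left behind ```
| rex mode=sed field=_raw "s/ - / /g"
``` drop standalone numbers such as the duration ```
| rex mode=sed field=_raw "s/\b\d+ //g"

Applied to the sample event this leaves "CSR Sending error response back in msecs." - suitable for grouping identical error statements.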
Admin is the same as root in the *nix world. You could try different tricks to restrict what it can do, but there is always a way to get around those restrictions! To be honest, your company must implement policies which are mandatory, and if someone doesn't follow them then there are consequences. Otherwise there will always be surprises from time to time. Of course, there should also be other ways to motivate your colleagues to first understand why the policies exist and why everyone must follow them.
Thanks @ITWhisperer  & @bowesmana  for all your help!
That directory probably contains older jQuery files.
We can't see what might be wrong with your search if we can't see the actual events the search is running against. Please share some anonymised events which demonstrate the issue you are facing.
Hi @Brandon.Camp, I wanted to follow up to see if you saw Mario's reply. If the reply helped, please click the "Accept as Solution" button. If not, please keep the conversation going and reply to this thread.
Alrighty, made an inputs.conf in my splunkuniversalforwarder/etc/system/local that looks like this:

[default]
host = "computername"

[splunktcp:9997]
connection_host = ip

[WinEventLog://Security]
disabled=0
current_only=1
blacklist1=5447

I'll bother you in a bit to see if it worked haha. I really appreciate your help!
I am not getting it. Do you want me to share the dashboard output?
Hi @varsh_6_8_6 , you can find many answers to this question. I proposed having this feature in dashboards in Splunk Ideas and it's a future prospect; if you think it's interesting, upvote for it at https://ideas.splunk.com/ideas/EID-I-572

Anyway, in the meantime, only one question before the answer: is messagevalue a number or something else? If it's a number, please try:

index="xyz" host="*" "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| stats latest(messagevalue) AS messagevalue
| append
    [ | makeresults
    | eval messagevalue=0
    | fields messagevalue ]
| stats sum(messagevalue) AS messagevalue

Ciao. Giuseppe
The following is my query.

index="xyz" host="*"
| fields host, messagevalue
| search "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| stats latest(messagevalue)

For a given period, if there are no events, "No results found" is displayed. Instead I want zero to be displayed. I tried using fillnull but no luck.
If my answer answered your question, please "Accept it as Solution". If it helped you anyway, kindly upvote!
Agreed, if it is a UI issue, then go with a Splunk support case.
Son of a...lol I'll give that a shot and get back with you. Thanks for the replies!
Again - close, but no banana. But seriously, you need to filter on input on those UFs, not on output, so you must add those settings to inputs.conf on the UFs. Your UFs are not outputting events to the EventLog.
As others already stated - it's a bit of a vague requirement. "Gather all data and present it in a readable format" at first glance reads to me as "print all raw events Splunk is receiving", which is pretty much impossible for a human to read and a bit pointless too.

If you want some aggregate view to gain insight into what _kinds_ of data Splunk is getting and _where from_, you'll have to be a bit creative since - as you already noticed - if you simply do an overall tstats split by source, sourcetype and host, you'll get a load of results, but they won't make much sense either. You need to do some "half-manual" filtering, like aggregating file sources by path or even overall by sourcetype. How much of it you have to do will vary depending on your actual data.

In some cases you can simply do some tweaking with SPL - maybe matching some sources against a regex, maybe just adding up all sources or all hosts by sourcetype... In smaller environments you might just get away with exporting results to CSV and a bit of manual tweaking in Excel to get reasonable results.
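A rough sketch of that "half-manual" aggregation (the path regex is an assumption and will need adjusting to your data):

| tstats count WHERE index=* BY index sourcetype source
``` collapse individual files down to their parent directory ```
| rex field=source "^(?<source_path>.*[/\\\\])"
| eval source_path=coalesce(source_path, source)
| stats sum(count) AS events dc(source) AS distinct_sources BY index sourcetype source_path
| sort - events

From there it's often easier to export to CSV and finish the grouping by hand, as described above.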
Right on. I added the blacklist to my machine's UF outputs.conf last night as there wasn't an inputs.conf. I checked this morning and that event is still coming through. This is what the outputs.conf looks like:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = "ip of splunk":9997

[WinEventLog://Security]
disabled=0
current_only=1
blacklist = 5447
Hi @Cyber_Shinigami , in Splunk, the main question is: what do you want to display? Do you want a list of sourcetypes or a list of hosts?

I suppose, but it's only an idea of mine, that an executive is mainly interested in the kind of data indexed, so I'd display some grouped information, like the number of different hosts:

| tstats dc(host) AS host_count count where index=* by sourcetype
| sort -count
| head 10

You could also eventually add a lookup that translates the sourcetypes into a more comprehensible description: e.g. cp_log -> "CheckPoint Logs" or fgt_logs -> "Fortinet Logs".

Ciao. Giuseppe
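A sketch of that lookup idea - sourcetype_descriptions.csv and its columns are made up for illustration; you would upload your own CSV with a sourcetype and a description column:

| tstats dc(host) AS host_count count where index=* by sourcetype
``` sourcetype_descriptions.csv is a hypothetical lookup with columns: sourcetype, description ```
| lookup sourcetype_descriptions.csv sourcetype OUTPUT description
| eval description=coalesce(description, sourcetype)
| sort -count
| head 10
| table description host_count count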