All Posts


Got a search like this (I've obfuscated it a bit):

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host,"^.*.device.mycompany.com$")

Got a great-looking stats table, and I'm really pleased with the performance of tstats. Awesome. I want to graph the results... easy, right? Well, no. I cannot for the life of me break down a, say, 60-minute span by host, despite the fact that I have this awesome, oven-ready, totally graphable stats table. So I am trying:

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host,"^.*.device.mycompany.com$")
| timechart count by host

but the count is counting the host rows, whereas I want to "count the count". Any ideas? This will be a super simple one, I expect. I've got a total mental block on this.
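A sketch of one way the per-host counts might be re-aggregated so timechart sums them instead of recounting rows. The index names and host pattern are taken from the search above; `sum(count)` and the escaped dots in the regex are assumed fixes, not confirmed by the poster:

```
| tstats count where index IN (index1, index2, index3) by _time span=1h, host
| where match(host, "\.device\.mycompany\.com$")
| timechart span=1h sum(count) as count by host
```

With plain `timechart count by host`, each bucket counts result rows per host; `sum(count)` adds up the counts tstats already computed.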
You probably have PuTTY or another SSH client installed on your Windows machine? If not, please install one. It includes an sftp/scp client which you can use to copy that file to Linux.
I have installed a Splunk Enterprise free trial in a VM as the root user. I know the best practice is to avoid running Splunk as root, because if the underlying OS gets compromised, the attacker then has root-level access to your OS. I am following the online docs, which say that once you install Splunk as root, you should not start it; instead, add a new user and then change ownership of the Splunk folder to that new non-root user.

But before I do that: when Splunk is installed, I check its ownership and it is already set to splunk. Does this mean Splunk has already configured a non-root user automatically upon installation?

If so, how would I make sure it has read access to the local files I want to monitor?
Yes, you should install it as the root user, but then you should chown it to splunk (or another non-root user) and enable it to start as that user.
"topic" is not a recognized value for the SOURCE_KEY setting. Try using this transform:

[setindexHIGH]
SOURCE_KEY = _raw
REGEX = ("topic":\s*"audits")
DEST_KEY = _MetaData:Index
FORMAT = imp_high
I believe it doesn't show correct results. I mean, sometimes it shows a count of one event for a source IP, when I presume it should be a minimum of 5. Or have I missed something?
@richgalloway, what will happen? How do we install it then?
What is incorrect about the first search?
Hi, how do we copy .tgz files from a Windows server to a Linux box? Can anyone help me with doing this?
Hello!! Thank you for your response! And I'm sorry I explained myself so poorly!

"spath does not work": What I meant by this was that, taking the previous event string as an example, I am unable to use SPL queries such as

index="my_index" logid="log_id_here" service="service_here" responseMessage="response_message_here"

Instead I have to use

index="my_index" "log_id_here" "service_here" "response_message_here"

or

index="my_index" "log_id_here" service logid responseMessage

This is because no data is found when using field searches such as responseMessage="response_message_here"; instead I must search for specific string fragments within the event output. The output is formatted as a string instead of JSON, making SPL query creation a real pain.

"What is your query": One example would be to individually get each responseMessage like this:

index="my_index" "log_id_here" logid service responseMessage \\\"responseMessage\\\" : \\\"null\\\"

instead of the normal way, which would be

index="my_index" logid="log_id_here" service responseMessage | stats count by responseMessage | dedup responseMessage

"What results do I expect": Currently I'm trying to get unique services and order them descending by the error count for each (which is based on the responseMessage).

"What results do I get": Currently I'm able to get the count of each service by using string literals such as \\\"service\\\" : \\\"desk\\\"; other than that, I'm stuck. I'm guessing this could be done with something like

index="my_index" "logid" | stats count by service, responseMessage | eval isError=if(responseMessage!="success",1,0) | stats sum(isError) as errorCount by service

I apologize in advance in case I've once again missed important details or given wrong queries; I haven't been able to try them out as the documentation shows :C Thank you very much for your time!!
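A sketch of one possible workaround, assuming the events are JSON that has been serialized into a string with escaped quotes: un-escape _raw with replace, then let spath extract the fields. The index, search terms, and field names follow the examples above; the exact replace pattern is an assumption and may need adjusting against a real event:

```
index="my_index" "log_id_here"
| eval _raw=replace(_raw, "\\\\\"", "\"")   ``` turn \" back into " so the payload parses as JSON ```
| spath
| stats count by service, responseMessage
| eval isError=if(responseMessage!="success", 1, 0)
| stats sum(isError) as errorCount by service
| sort - errorCount
```

If the data carries more than one layer of escaping, the replace step would need to be applied per layer.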
Yes, but it is not recommended.
I left out an important step.  Please try my revised answer.
Hi, I'm trying to create two searches and having some problems. I hope somebody can help me with this.

1. 7 or more IDS alerts from a single IP address in one minute. I created something like the search below, but it doesn't seem to be working correctly:

index=ids
| streamstats count time_window=1m by src_ip
| where count>=7
| stats values(dest_ip) as "Destination IP" values(attack) as "Attack" values(severity) as "Severity" values(host) as "FW" count by "Source IP"

2. 5 or more hosts attacked with the same IDS signature in 1 hour. This seems even more complex, as it has 3 conditions: 5 hosts, 1 hour, and the same IDS signature. So I'm not sure how to even start, after failing at the first one. Could somebody help me with this, please?
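For the second use case, a sketch under the assumption that the signature field is called attack and the victims are dest_ip (both field names are taken from the first search above, not confirmed for this data): bucket events into 1-hour spans, then count distinct attacked hosts per signature.

```
index=ids
| bin _time span=1h
| stats dc(dest_ip) as attacked_hosts values(dest_ip) as "Attacked Hosts" by _time, attack
| where attacked_hosts >= 5
```

The same bin-then-stats pattern could replace streamstats in the first search (| bin _time span=1m | stats count by _time, src_ip | where count >= 7), since streamstats time_window can behave unexpectedly if the results are not strictly time-ordered.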
I am trying to use a table column for a drilldown but not display it. In XML dashboards I could do it by specifying: ``` <fields>["field1","field2"...]</fields> ``` I would then still be able to use field3 in setting drilldown tokens. How can I do this in Dashboard Studio? I can't find a way to hide a column without removing the ability to then also refer to it in tokens.
Let's say that I have a dashboard A containing a table:

App Name | App Host | LinkToB | LinkToC | LinkToD
abc      | host 1   | LinkToB | LinkToC | LinkToD
def      | host 2   | LinkToB | LinkToC | LinkToD
xyz      | host 1   | LinkToB | LinkToC | LinkToD

I have 3 other dashboards (B, C, D). I want to click "LinkToX" to link to dashboard X. However, in the Splunk Dashboard Studio UI, I can only link the table to one dashboard. Is there any way to configure the JSON to make the table link to multiple dashboards? Or is there any way to make cells in the table clickable URL links instead? Thank you!
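A sketch of the kind of per-table drilldown Dashboard Studio's JSON supports: a drilldown.customUrl event handler on the table, which opens a URL built from row tokens. The app name, dashboard ID, and field name below are placeholders, not values from this dashboard:

```
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/app/myapp/dashboard_b?form.app_name=$row.app_name.value$",
            "newTab": true
        }
    }
]
```

This still gives one target URL per table, so routing to B, C, or D depending on which cell was clicked would need the clicked column's value encoded into the URL, or three separate tables.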
Thanks, all you guys. You gave me a lot of hints and you have been very helpful.
Hey there! I think you should be all set now (please let me know if not) but I wanted to respond here in case anyone comes across this in the future. We did have an issue that week with the developer license request system, which has now been addressed. This was complicated by the US holiday that week, which is the reason for the delayed response to your email as well. In general, reaching out to devinfo@splunk.com is the right course of action. If you're not getting a response there (sometimes that ends up being a spam filter issue), you can also reach out in the #appdev channel on the Splunk usergroup slack for assistance.
Scripted inputs yes, scripted alert actions maybe not.
Thank you for your answer, it helped me out. The final version was a bit trickier, as the ips field can contain an "*" instead of any of the listed values, and in that case any of the found values should be considered. So this was the final solution:

| makeresults
| eval ips="a,c,x"
```| eval ips="*"```
| eval ips=replace(ips, "\*", "%")
| map [
    | makeresults
    | append [ makeresults | eval ips="a", label="aaa" ]
    | append [ makeresults | eval ips="b", label="bbb" ]
    | append [ makeresults | eval ips="c", label="ccc" ]
    | append [ makeresults | eval ips="d", label="ddd" ]
    | eval outer_ips=split("$ips$", ",")
    | where (ips=outer_ips OR LIKE(ips, "$ips$"))
    ```With the above condition, when only a * (%) is there as the value, the LIKE will catch it; when there are other values, the first condition will catch the proper events.```
  ] maxsearches=10
Also, let's put the new processor last in the list in your pipeline--that way you can be sure the host.name is set by the resource detection processor.