All Posts


I believe it doesn't show correct results. I mean, sometimes it shows a count of one event for a source IP, which I presume should be at least 5. Or have I missed something?
@richgalloway, what will happen? How do we install it then?
What is incorrect about the first search?
Hi, how do we copy the .tgz files from a Windows server to the Linux box? Can anyone help me with doing this?
Hello!! Thank you for your response! And I'm sorry I explained myself so poorly!

spath does not work: What I meant by this is that, taking the previous event string as an example, I am unable to use SPL queries such as

index="my_index" logid="log_id_here" service="service_here" responseMessage="response_message_here"

Instead I have to use

index="my_index" "log_id_here" "service_here" "response_message_here"

or

index="my_index" "log_id_here" service logid responseMessage

This is because no data is found when using field filters such as responseMessage="response_message_here"; instead I must search for specific string fragments within the event output. The output is formatted as a string instead of JSON, which makes writing the SPL query a real pain.

What is your query: One example would be to individually get each responseMessage like this:

index="my_index" "log_id_here" logid service responseMessage \\\"responseMessage\\\" : \\\"null\\\"

instead of the normal way, which would be

index="my_index" logid="log_id_here" service responseMessage | stats count by responseMessage | dedup responseMessage

What results do I expect: Currently I'm trying to get the unique services and order them descending by the error count for each (which is based on the responseMessage).

What results do I get: Currently I'm able to get the count of each service by using string literals such as \\\"service\\\" : \\\"desk\\\"; other than that I'm stuck. I'm guessing this could be done with something like

index="my_index" "logid" | stats count by service, responseMessage | eval isError=if(responseMessage!="success", 1, 0) | stats sum(isError) as errorCount by service

I apologize in advance in case I've once again missed important details or given wrong queries; I haven't been able to try them out the way the documentation shows :C Thank you very much for your time!!
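One possible sketch, reusing the index and field names from the post above (the replace() pattern is an assumption about how the quotes are escaped in your raw events and may need adjustment): strip the backslash-escaping so spath can parse the embedded JSON, then count errors per service.

```
index="my_index" "log_id_here"
``` Unescape \" into " so the string becomes parseable JSON (assumed escaping) ```
| eval clean=replace(_raw, "\\\\\"", "\"")
| spath input=clean
| eval isError=if(responseMessage!="success", 1, 0)
| stats sum(isError) as errorCount by service
| sort - errorCount
```

If the unescaping works, the field filters you wanted (service="...", responseMessage="...") should start matching as well.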
Yes, but it is not recommended.
I left out an important step.  Please try my revised answer.
Hi, I'm trying to create two searches and having some problems. I hope somebody can help me with this.

1. 7 or more IDS alerts from a single IP address in one minute. I created something like the below, but it doesn't seem to be working correctly:

index=ids | streamstats count time_window=1m by src_ip | where count >= 7 | stats values(dest_ip) as "Destination IP" values(attack) as "Attack" values(severity) as "Severity" values(host) as "FW" count by "Source IP"

2. 5 or more hosts in 1h attacked with the same IDS signature. This seems to be even more complex, as it has 3 conditions: 5 hosts, 1 hour, and the same IDS signature. So I'm not sure how to even start after failing the first one. Could somebody help me with this, please?
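One possible sketch for the first search, assuming the field names in the post (src_ip, dest_ip, attack, severity, host): bucket events into one-minute windows with bin and count per window, which avoids streamstats' dependence on event ordering.

```
index=ids
| bin _time span=1m
| stats values(dest_ip) as "Destination IP" values(attack) as "Attack" values(severity) as "Severity" values(host) as "FW" count by _time, src_ip
| where count >= 7
```

And for the second, assuming the signature lives in the attack field and the attacked host in dest_ip, a distinct count of hosts per signature over the last hour:

```
index=ids earliest=-1h
| stats dc(dest_ip) as hostCount values(dest_ip) as "Hosts" by attack
| where hostCount >= 5
```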
I am trying to use a table column for a drilldown but not display it. In XML dashboards I could do it by specifying: ``` <fields>["field1","field2"...]</fields> ``` I would then still be able to use field3 in setting drilldown tokens. How can I do this in Dashboard Studio? I can't find a way to hide a column without removing the ability to then also refer to it in tokens.
Let's say that I have a dashboard A containing a table:

App Name | App Host | LinkToB | LinkToC | LinkToD
abc      | host 1   | LinkToB | LinkToC | LinkToD
def      | host 2   | LinkToB | LinkToC | LinkToD
xyz      | host 1   | LinkToB | LinkToC | LinkToD

I have 3 other dashboards (B, C, D). I want to click "LinkToX" to link to dashboard X. However, in the Splunk Dashboard Studio UI, I can only link the table to one dashboard. Is there any way to configure the JSON to make the table link to multiple dashboards? Or is there any way to make the cells in the table clickable URL links instead? Thank you!
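One pattern that may help, sketched under the assumption that the search can emit the target dashboard's path as a column value (target_dashboard and my_app below are placeholder names): a custom-URL drilldown on the table can build the link from a row token, so each row can point at a different dashboard. Note this is per-row rather than per-cell, so it covers the case where the row determines the destination.

```
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/app/my_app/$row.target_dashboard.value$",
            "newTab": true
        }
    }
]
```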
Thanks, all you guys, you gave me a lot of hints and you have been very helpful.
Hey there! I think you should be all set now (please let me know if not) but I wanted to respond here in case anyone comes across this in the future. We did have an issue that week with the developer license request system, which has now been addressed. This was complicated by the US holiday that week, which is the reason for the delayed response to your email as well. In general, reaching out to devinfo@splunk.com is the right course of action. If you're not getting a response there (sometimes that ends up being a spam filter issue), you can also reach out in the #appdev channel on the Splunk usergroup slack for assistance.
Scripted inputs yes, scripted alert actions maybe not.
Thank you for your answer, it helped me out. The final version was a bit trickier, as the ips field can be an "*" instead of any listed values, and in that case any of the found values should be considered. So this was the final solution:

| makeresults
| eval ips="a,c,x"
```| eval ips="*"```
| eval ips=replace(ips, "\*", "%")
| map
    [ | makeresults
      | append [ makeresults | eval ips="a", label="aaa" ]
      | append [ makeresults | eval ips="b", label="bbb" ]
      | append [ makeresults | eval ips="c", label="ccc" ]
      | append [ makeresults | eval ips="d", label="ddd" ]
      | eval outer_ips=split("$ips$", ",")
      | where (ips=outer_ips OR LIKE(ips, "$ips$"))
      ```With the above condition, when only a * (%) is there as a value, the LIKE catches it; when there is some other value, the first condition catches the proper events.```
    ] maxsearches=10
Also, let's put the new processor last in the list in your pipeline--that way you can be sure the host.name is set by the resource detection processor.
Hi, I suggest putting "system" at the end of the resource detection list the way you had it originally. Also, in the new processor, please check the indentation--it looks like there are extra spaces.

Next, if everything is working OK, you may not see "azure_resource_name" appear in the infrastructure navigator until the old MTS (metric time series) ages out (approximately 25 hours). You can confirm it's working, though, by going to Metric Finder and searching for "cpu.utilization". This will open a new chart for cpu.utilization--click the "data table" to see the raw data and dimensions. Your Azure VM should be listed twice at this point--confirm that one of the listings has an "azure_resource_name" dimension and that it looks correct. If you see it there, you'll just need to wait ~25 hours for the infrastructure navigator to stop using the old MTS.
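For reference, a minimal sketch of the detector ordering being suggested, using the OpenTelemetry Collector's resourcedetection processor (the "azure" detector here is an assumption based on your Azure VM; keep whatever detectors you already have, just with "system" last):

```
processors:
  resourcedetection:
    detectors: [azure, system]   # "system" last, as suggested
```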
@PickleRick it didn't work either.
Hi, unfortunately it didn't work. It appeared as follows:

ID_SERVICE                              SERVICE_TYPE     TIMESTAMP
id_service_value1,servicetype_value1    <blank_value>    <timestamp_ok>
id_service_value2,servicetype_value2    <blank_value>    <timestamp_ok>

The timestamp was duplicated successfully and the values were split as expected, but they ended up comma-joined in the first column, and the next column was left blank.

Additionally, the mvexpand added considerable execution time; the search had been performing really fast, and the performance decreased :(. Even so, I appreciate your time and response in helping me, @richgalloway!
Can we install as root?
Splunk will happily run a scripted input from $SPLUNK_HOME/etc/apps/<app>/bin - I run a PowerShell input this way.
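For example, a stanza along these lines in the app's local/inputs.conf will run a script from the app's bin directory on a schedule (my_app, collect.ps1, and the sourcetype are placeholder names):

```
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.ps1]
interval = 300
sourcetype = my:scripted:input
disabled = 0
```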