All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I'm sorry, I think I put it in the wrong place. We're using Splunk Cloud, so this solution (ACS) will probably work. I'll update once I've worked on it to confirm it meets my needs.
Based on the group where you posted this question: are you doing this on Splunk Enterprise rather than Splunk Cloud? ACS works only with Cloud, not with Enterprise. In Enterprise you need CLI access to the node, and then you can script it. Ansible, for example, is a good tool for managing installations: you could have a control node that pulls packages/apps from Git and then installs them with ansible-playbook.
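A minimal sketch of that Ansible approach, under stated assumptions: the repo URL, app name, paths, and service name below are all hypothetical, not taken from the thread.

```yaml
# Hypothetical playbook: pull a Splunk app from Git and deploy it to nodes.
# Repo URL, destination path, and service name are illustrative assumptions.
- hosts: splunk_nodes
  become: true
  tasks:
    - name: Check out app from Git
      ansible.builtin.git:
        repo: "https://git.example.com/splunk/my_app.git"
        dest: /opt/splunk/etc/apps/my_app
        version: main
      notify: restart splunkd

  handlers:
    - name: restart splunkd
      ansible.builtin.service:
        name: splunkd
        state: restarted
```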
@gcusello have you tried adding a _meta tag in your HF/UF's inputs.conf and putting that information there? I think that could solve your needs.
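For reference, a minimal sketch of that _meta approach in inputs.conf on a forwarder. The monitor path, field names, and values below are illustrative assumptions, not the poster's actual config:

```
# inputs.conf on the HF/UF -- path and fields are examples only
[monitor:///var/log/myapp/app.log]
sourcetype = myapp
# _meta attaches indexed fields to every event from this input;
# the field names/values here are hypothetical
_meta = site::paris environment::production
```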
Those messages are quite normal and don't describe what issue you have. Have you tried e.g. nc or curl to check whether the master is listening for peers and responding at all? Is pass4SymmKey working, or are there any messages about it in _internal? By the way, when you post logs, please use the code block element (</>) to paste those lines. It's much easier to read, and we can be sure the lines are exactly what you pasted. If the connection between master and peer is working, there are lots of messages in _internal.
Hi
Based on these conf files, it seems to do the following:
- Take the timestamp from the beginning of the event and put it into _time
- Ensure that lines are no longer than 10000 characters
- The syslog-host transformation is missing, so I cannot tell what it does!
- Extract the hostname from the event and save it into metadata for use in the next step
- Define the index to use based on the hostname (FQDN) in the event; the FQDN-to-index mapping is defined in that CSV lookup file
- Change \r\n newlines to just \n
- Don't generate punctuation for the event
More detailed information is in the links that @PaulPanther added in his post.
r. Ismo
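As a rough, hypothetical illustration of how behaviors like those typically map to props.conf/transforms.conf stanzas (the sourcetype and transform names below are invented, not the poster's actual files):

```
# props.conf -- illustrative sourcetype name
[my_syslog]
TIME_PREFIX = ^                  # timestamp at start of event -> _time
TRUNCATE = 10000                 # cap events at 10000 characters
SEDCMD-strip_cr = s/\r\n/\n/g    # normalize \r\n newlines to \n
ANNOTATE_PUNCT = false           # don't generate the punct field
TRANSFORMS-sethost = example_extract_host

# transforms.conf -- illustrative host-override transform
[example_extract_host]
REGEX = ^\S+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
```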
Sorry for the late reply. As I've changed my mail address over the years, I don't receive email notifications for replies. Here's the app: https://github.com/skalliger/encryption_and_vulnerability_check
I am confused about why I only get IDs from my Salesforce ingest. For example, I am not getting Username, Profile Name, Dashboard Name, Report Name, etc.; I am getting the User ID, Profile ID, Dashboard ID, and so forth, which makes searches really difficult. How can I correlate the ID to readable, relevant information, where User_ID equates to a Username (Davey Jones)? Help please.
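One common pattern (a sketch, not a confirmed fix): if you also ingest the Salesforce User object, you can build a lookup of Id-to-Name mappings and apply it at search time. The index, sourcetypes, lookup file, and field names below are assumptions; check what your Salesforce add-on actually produces.

```
# Populate the lookup (e.g. on a schedule); names are illustrative
index=salesforce sourcetype=sfdc:user
| stats latest(Username) as Username by Id
| outputlookup sfdc_users.csv

# Then enrich your event searches
index=salesforce sourcetype=sfdc:logfile
| lookup sfdc_users.csv Id as USER_ID OUTPUT Username
```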
Hi @Alex.Nyago, Thanks for asking your question on the Community. Did you happen to find a solution or any new information you can share here? If you still need help, please contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Hi @Hector.Arredondo, I reached out to your AppDynamics CSM. They should be in touch with you to talk more about this. 
Then you'll want to look at the schedule setting, which defaults to running the script at startup.
Thank you @bowesmana. With the Time Selector set to Year to Date, and not using the earliest command,

| timechart span=1mon count

results in 2024 as expected. Then, using the following, I end up with a timeline of 2024 but the data claiming it's 2023. It is definitely 2024 data, labeled as 2023.

| timechart span=1mon count
| timewrap 1y series=exact time_format=%Y
Not sure if this will help, but here are a couple of things I noticed. In your original question you have the word "Log" capitalized, but in the syntax it is not. Could that be why it's not working? I also noticed that in your question the words "INFO" and "WARNING" are all capitalized but "Error" is not, yet you have it as "ERROR" in the syntax. I often have spelling mistakes in my code that I don't catch right away, so I thought I'd offer that up as a suggestion. Good luck!
Real-time searches lock up CPUs and should probably be avoided. You should ask yourself (and your users) how urgently you need the alert. What is the maximum tolerable time between the event occurring and user y being sent an email? How quickly does y need to be able to react? Are they sitting waiting for the email to come in? How quickly does the notification go stale? Basically, buy yourself as much time as possible and then schedule your searches accordingly; otherwise you will end up burning resources frequently checking for events that don't happen very often.
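Applied to the lockout question in this thread, a scheduled (not real-time) sketch: Windows logs EventCode 4740 ("A user account was locked out"), so a search run every 10 minutes over the previous 10 minutes sees each lockout exactly once. The index, sourcetype, and field names below are assumptions to adapt to your environment:

```
index=wineventlog sourcetype=XmlWinEventLog EventCode=4740 earliest=-10m@m latest=@m
| stats count latest(_time) as last_locked by user
| convert ctime(last_locked)
```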
Thank you for sharing sample data. It reveals additional weaknesses in the approach. ORDERS seems to be the ID that comes after TransNum, which the original regex does not extract at all. The sample data also contradicts your original index search, but that is more for you to fine-tune. Part of the event is structured JSON. This should be treated as a structure, not as literal strings; extraction using regex is unstable. Based on your sample events (which suggest that the source is exactly the same, and therefore that a subsearch is really a bad approach), this would be a much better strategy:

index=source (("status for" "Not available") OR "Request for")
| rex "TransNum: (?<ORDERS>\S+) .*?(?<JSON>{.+})"
| spath input=JSON path=products{}
| mvexpand products{}
| spath input=products{}
| stats values(uniqueid) as uniqueid by ORDERS

(Note: the index search is based purely on the sample data. You may need to tune it to actually include the correct events.) Your sample data will give you:

ORDERS    uniqueid
629f2ad   QSTRUJIK

Here is an emulation of your data.
Play with it, compare with real data, and refine your search strategy.

| makeresults
| fields - _*
| eval data = mvappend("INFO [pool-9-thread-3] CLASS_NAME=Q, METHOD=, MESSAGE=response status for TransNum: 629f2ad - 400 | Response - {\"code\":0001,\"message\":\"Not available\",\"messages\":[],\"additionalTxnFields\":[]}", "INFO [pool-9-thread-7] CLASS_NAME=Q, METHOD=, MESSAGE=Request for TransNum: 629f2ad - {\"address\":{\"billToThis\":true,\"country\":\"\",\"email\":\"******************\",\"firstname\":\"FN\",\"lastname\":\"LN\",\"postcode\":\"0\",\"salutation\":null,\"telephone\":\"+999999999999\"},\"deliveryMode\":\"\",\"payments\":[{\"amount\":10,\"code\":\"BFD\"}],\"products\":[{\"currency\":356,\"price\":600,\"qty\":2,\"uniqueid\":\"QSTRUJIK\"}],\"refno\":\"629f2ad\",\"syncOnly\":true}")
| mvexpand data
| rename data as _raw
| extract
``` the above emulates index=source (("status for" "Not available") OR "Request for") ```
Hi, this error seems to occur with older versions of Helm. Can you please confirm your version of Helm and see if it's possible to update to a current version?
@uagraw01 that is Splunk's default user role, and it is recommended as a best practice. It works with rest_properties_get; if you remove that, you will have different issues, so I do not recommend it. You have several capabilities there which are not needed, like Data inputs, Tokens, and Server Settings; these should be handled by an admin. Typical Splunk user role native capabilities. If this helps, please upvote.
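For reference, a hedged authorize.conf sketch of a custom role that inherits from the built-in user role and keeps rest_properties_get enabled. The role name and index restriction below are illustrative assumptions:

```
# authorize.conf -- illustrative custom role
[role_limited_user]
importRoles = user
rest_properties_get = enabled
srchIndexesAllowed = main
```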
I am looking to build an alert that sends an email if someone locks themselves out of their Windows account after so many attempts. I built a search, but when using real-time I was bombarded with emails every few minutes, which doesn't seem right. I would like the alert set up such that if user x types the password wrong 3 times and AD locks the account, an email is sent to y's email address. Real-time alerting seems to be what I need, but it bombards me way too much.
Thank you for your reply. We hear from support that the interval attribute can be used in [script] but not in [powershell].
Check the scripted inputs for those with interval=-1.  That tells Splunk to run the script at startup.
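In inputs.conf terms (script paths below are illustrative): interval = -1 means run once at startup, a number of seconds gives a repeating schedule, and scripted inputs also accept a cron expression:

```
# Run once, at Splunk startup
[script://./bin/startup_check.sh]
interval = -1

# Run every hour (3600 seconds)
[script://./bin/collect_stats.sh]
interval = 3600

# Run daily at 02:00 (cron syntax)
[script://./bin/nightly_report.sh]
interval = 0 2 * * *
```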
Has anyone figured out how to run PowerShell only at a scheduled time? In addition to the scheduled time, it runs every time the forwarder is restarted.