As you're not an expert, it might be better to explore Splunk's Add-on Builder, which has options to create what you need, credential handling included. Have a look at the links below, as they may help. https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/CreateAlertActions https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/ConfigureDataCollection
So, I have data like this after running a query. For each aggregator, if the aggregator_status is Error and within 15 minutes the aggregator_status becomes Up, the alert should not fire. But if the aggregator_status is still Error, or no new event arrives, the alert should trigger. The Time field is epoch time, which I'm thinking can be used to find the difference between the Up and Error status times. How do I create such a query for the alert? I am thinking of using the foreach command or some sort of streamstats, but I am unable to resolve this. The alert needs to run once every 24 hours.
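One hedged sketch of this alert using stats instead of foreach/streamstats (index, sourcetype, and the search window are placeholders to adapt; 900 seconds is the 15-minute threshold):

```
index=your_index sourcetype=your_sourcetype earliest=-24h
| stats latest(aggregator_status) AS last_status latest(Time) AS last_time BY aggregator
| where last_status="Error" AND (now() - last_time) > 900
```

The idea: per aggregator, look only at the most recent event. If it's Error and more than 15 minutes have passed with no newer event (so no Up arrived in time), the row survives and the alert can fire on "number of results > 0".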
Hi Deepak C Thank you so much for your kind and prompt reply. It's more than appreciated. Splunk has been set up to extract the logs and get all the needed information from AD event logs, including event ID, user ID, etc., in order to troubleshoot any problems in the AD DC such as user account lockouts. The image from my previous question is from a search on the user's ID, and in this case it pulled EventCode 4776, basically saying the account is locked out. The question is: how do I investigate to get to the root cause and find out what is locking the account out? If you are able to help, that would be of great significance, as I would like to get the user up and running on Monday without any further problems. Regards.
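For the root-cause question above: on current domain controllers, event 4740 ("A user account was locked out") carries the name of the machine the failed attempts originated from. A hedged starting search (the index and exact field names depend on your Windows add-on configuration):

```
index=wineventlog EventCode=4740 user="locked.user"
| table _time, user, Caller_Computer_Name, host
```

Caller_Computer_Name (if your TA extracts it under that name) is usually the machine holding the stale credential that keeps triggering the lockout.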
Hi @Habanero,
I’m a Community Moderator in the Splunk Community.
This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
I had a look at that one, but I am not really an expert so I couldn't get much out of it. For example, where would my API credentials reside, and how do I call the API from a custom alert action?
BTW, I just noticed, you're testing with the /raw endpoint. If the solutions you're trying to get events from claim to support "native Splunk HEC functionality", they might be trying to post to the /event endpoint. And if they do it wrong, the input won't accept the data.
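To make the difference concrete, here's a small sketch of the two payload shapes (the domain, token, and field values are made up for illustration). /raw takes the body verbatim, while /event expects a JSON envelope with an "event" key; a sender that posts a bare string to /event gets rejected:

```python
import json

# Hypothetical Splunk Cloud HEC base URL -- substitute your own.
base = "https://http-inputs-mydomain.splunkcloud.com:443/services/collector"

# /raw: the body is taken as-is; each line becomes an event.
raw_url = base + "/raw"
raw_body = "2024-05-01 12:00:00 user=alice action=login"

# /event: the body must be a JSON envelope with an "event" key,
# optionally accompanied by metadata fields like sourcetype.
event_url = base + "/event"
event_body = json.dumps({
    "event": {"user": "alice", "action": "login"},
    "sourcetype": "vendor:json",
})
```

Capturing what the vendor integration actually POSTs (endpoint path and body shape) and comparing it against these two forms is a quick way to spot the mismatch.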
As an expansion of @kprior201 's answer - a bit of an explanation. Since DS is the component directly responding to queries from the deployment clients, it maintains and displays the list of clients that already "phoned home". But if you restart the DS service, it has to rebuild its database. On the other hand, MC does not interact directly with the deployment clients in any way. It only monitors the _internal index for logs forwarded from all components in your environment. So you might have a situation where some forwarders do phone home and get apps from the DS but cannot properly send their events to the indexer layer.
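Since the MC works off _internal, one way to spot forwarders that phone home to the DS but aren't reaching the indexer layer is to check when each forwarder last opened a tcpin connection (a sketch; the metrics.log source path can differ per environment):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| eval last_seen=strftime(last_seen, "%F %T")
```

A forwarder visible in the DS client list but missing (or stale) here is getting its apps yet failing to deliver events.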
Come on, flip the pickle, Morty, you're not gonna regret it! Haha! Thanks for the reply. I have not tried actually capturing/sniffing traffic yet, although I'm headed in that direction. As far as allowed IPs (for HEC ingestion) I set it to allow all for my testing, so I don't think that's the issue.
I get this error: "Error in 'delete' command: This command cannot be invoked after the command 'eventstats', which is not distributable streaming." Is there anything else I can use here? My data is huge, so I can't use a join or subsearch. Thanks in advance @ITWhisperer
Since you obviously can't run tcpdump on the receiving side, and I'm not sure about _internal contents in Cloud, you can either try to observe the traffic on the source side (as you're sending to the Cloud over TLS you won't see the payload, of course, but you'll at least be able to see the overall request-response cycle, or lack thereof), or you can install a temporary local instance, mirror the configuration, and test in unencrypted form to verify whether your source systems handle posting to HEC properly. Also, I'm not sure whether you have to allowlist a set of source IPs to be able to receive traffic in Cloud in the first place (but I'm not a Cloud expert, don't quote me on that ;-))
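For that local-instance test, a minimal unencrypted HEC check might look something like this (the token is a placeholder; enable HEC and create a token on the test instance first, and drop the scheme to https plus -k if you left SSL on):

```shell
curl "http://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "hello from curl", "sourcetype": "manual_test"}'
```

If this succeeds but the vendor integration pointed at the same local instance fails, the problem is on the sender's side rather than in your HEC config.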
Hmm... This is just a single event? You can't use the starting string to break the events because it appears in the middle of the event as well. So you'd have to go for something like:
[jlogs]
#This one assumes that this is _the_ timestamp for the event.
#Otherwise it needs to be changed to match appropriate part of the event
TIME_PREFIX = Entry\s+\d+\s+starting\sat
#Watch out, this might get messy since you don't have timezone info!
TIME_FORMAT = %d/%m/%Y %H:%M:%S
#This needs to be relatively big (might need tweaking) since the timestamp is
#relatively far down the event's contents
MAX_TIMESTAMP_LOOKAHEAD = 200
#Don't merge lines. It's a performance killer
SHOULD_LINEMERGE=false
#Might need increasing if your events get truncated
TRUNCATE = 10000
NO_BINARY_CHECK = 1
#It's not a well-formed known data format
KV_MODE = none
#We know that each event ends with a line saying "software Completed..."
LINE_BREAKER=(?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)
#We need the same settings as non-intuitively named EVENT_BREAKER because you
#want the UFs to split your data into chunks in proper places
EVENT_BREAKER=(?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)
EVENT_BREAKER_ENABLE=true
You should put this in props.conf on both your receiving indexer(s)/HF(s) and on your UF ingesting the file.
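To sanity-check the LINE_BREAKER before deploying it, you can exercise the same regex against a toy sample (the sample lines below are invented; Splunk ends the previous event at the start of capture group 1, starts the next event at its end, and discards the group's contents):

```python
import re

# The LINE_BREAKER regex from the props.conf above.
LINE_BREAKER = r"(?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)"

# A made-up two-event sample in the shape described in the thread.
sample = (
    "Entry 1 starting at 01/02/2024 10:00:00\n"
    "some payload line\n"
    "software Completed at 01/02/2024 10:05:00 local time\n"
    "Entry 2 starting at 01/02/2024 11:00:00\n"
)

m = re.search(LINE_BREAKER, sample)
# Split the stream the way Splunk would: break around capture group 1.
prev_event = sample[:m.start(1)]
next_event = sample[m.end(1):]
```

Here prev_event keeps the full "software Completed ... local time" trailer, and next_event begins cleanly at the next "Entry" line, which is exactly the breaking behaviour you want.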
So I'm unable to get HEC logs into Splunk Cloud (version 9.1.2312.102). When I test the HECs in Postman via (obviously I didn't enter my real domain or token, for privacy reasons): POST https://http-inputs-mydomain.splunkcloud.com:443/services/collector/raw with the Authorization header "Splunk mytoken", it works as expected and I receive a "text": "Success", "code": 0 response, which is good. I can also see the event in Splunk when I search for it. I did this individually for each HEC that I've created, and they all work. However, whenever I set up the actual HEC integrations via the applications I'm trying to integrate, I get nothing. I'm trying to send logs from Dashlane, FiveTran, Knowbe4, and OneTrust. All of these support native Splunk integrations; I enter the information as requested in their external logging setup and nothing shows in Splunk. I'm not sure what to do here. Any guidance would be awesome! Thanks in advance!
The data comes from either the AD server or the Windows servers by way of the Universal Forwarder; that's the source of the event logs. You have data coming in from the AD server where a UF is installed, and that's how the logs are collected. The logging itself is configured by your AD admin; sometimes they need to enable further auditing for advanced events. Try these first and see if they exist, as they may give you the further info you need; if they don't, it might be worth having a chat with your AD admin to find the exact event ID/log information you need. Event ID 4771 - Kerberos pre-authentication failed. Event ID 644 - User account locked out (this is the legacy ID; on current Windows versions it's 4740). Event ID 4625 - An account failed to log on.
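Once those event codes are being collected, a starting search might look like this (the index name and user value are assumptions for your environment; field names depend on your Windows add-on):

```
index=wineventlog user="locked.user" (EventCode=4771 OR EventCode=4740 OR EventCode=4625)
| stats count latest(_time) AS last_seen BY EventCode, host, src
```

Grouping by host and src usually points straight at the machine repeatedly submitting the bad credentials.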
Hi All, I just started a new role and haven't used Splunk in any previous jobs, so this is completely new to me. We have a user that is constantly getting account lockout issues. All our domain controller security logs etc. are extracted into Splunk every fifteen minutes. I am attempting to run a search from the Splunk Enterprise New Search field, but I can only extract the below information, which tells me the user, source, and host, and that the user has an audit failure. Could someone please point me to how I would go about extracting which machine the user is getting the account lockout from? I see quite a few searches on the internet, but they never say where the actual search text should be entered. Is it directly into the New Search field? Any help would be very much appreciated.
I'm using these two searches because I want to extract some fields using that regular expression; that's the only reason I'm appending. I want help with this so that I don't repeat the search twice and instead have one query producing a table with the fields total, success, error, correlationId, GCID, etc. Or, if I'm using the wrong approach, please suggest how to proceed: I have these logs and need to count them for total, success, and error, and the GCID/correlationId fields will be needed to show the details whenever there is an error. Please guide me on how I can proceed. Thanks in advance.
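One hedged sketch of doing this in a single pass with eventstats instead of append (the base search, the extraction regex, and the status values are placeholders, since the original query isn't shown):

```
index=your_index sourcetype=your_sourcetype
| rex field=_raw "<your_extraction_regex>"
| eventstats count AS total count(eval(status="success")) AS success count(eval(status="error")) AS error
| where status="error"
| table total, success, error, correlationId, GCID
```

eventstats attaches the overall counts to every event, so after filtering to the error rows you still carry total/success/error alongside the per-event detail fields, with the data read only once.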
@testingtena first identify the missing forwarder using the query below.
index=_internal source="/opt/splunk/var/log/splunk/metrics.log*" sourcetype="splunkd" fwdType="*" | dedup sourceHost | rename IPAddress AS hostip, sourceHost AS IPAddress, OS AS fOS | fields IPAddress, hostname, fGUID, fOS, fwdType
This will list information about connected forwarders based on logs.
there could be an issue with specific configuration files. Here's what to check:
serverclass.conf on the deployment server: ensure the server classes match your UFs so they receive the intended apps.
deploymentclient.conf on the UFs: verify the stanza pointing at the deployment server is correct (note that it's outputs.conf, not inputs.conf, that controls where the UF forwards its data).
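For reference, a minimal deploymentclient.conf stanza on a UF looks something like this (hostname and management port are placeholders for your deployment server):

```
# deploymentclient.conf on the UF
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```

If this stanza is missing or points at the wrong host/port, the forwarder never phones home and will not appear in the DS client list at all.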