All Posts


I will test it again later today and let you know if I then see an error.  I know the library I wanted to bring in, splunklib, did not exist.
We don't know what's going on, either.  What makes you think it's "erroring out" if there are no errors? You may have to add some debugging code to the script so it tells you more about what is happening.
Unfortunately it doesn't give me one.  So I'm not positive what's going on.
All the timestamps in the JSON we receive are UTC, but the TA ignores the time zone in the ISO 8601 string, so it defaults to local time. Thus, all our events are timestamped several hours into the future. I noticed that the timestamps Google provides vary from millisecond to nanosecond precision, but trailing zeros are truncated before the "Z" is tacked on. This makes it difficult to specify a time format with a trailing time zone that will work for every event. But instead, shouldn't all the source types have TZ = UTC in props? Am I the only one with this problem?
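To illustrate the precision problem outside Splunk, here is a rough Python sketch (the sample timestamps and the `parse_utc` helper are invented for illustration). Because trailing zeros are truncated, the fractional part can be anywhere from 0 to 9 digits, so no single fixed `strptime` format fits every event; padding/trimming the fraction and attaching the UTC offset explicitly handles them all, which is also why `TZ = UTC` in props is the simpler fix on the Splunk side:

```python
from datetime import datetime, timezone

def parse_utc(ts: str) -> datetime:
    """Parse an ISO 8601 UTC timestamp whose fractional seconds may have
    0-9 digits (trailing zeros truncated before the 'Z' is appended)."""
    if ts.endswith("Z"):
        ts = ts[:-1]
    if "." in ts:
        base, frac = ts.split(".")
        # datetime only supports microseconds: pad short fractions, trim nanoseconds
        ts = f"{base}.{frac.ljust(6, '0')[:6]}"
        fmt = "%Y-%m-%dT%H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%dT%H:%M:%S"
    return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)
```

Both millisecond-truncated and nanosecond-precision strings come back as proper UTC datetimes, whereas a single `%f`-based TIME_FORMAT has to cope with every precision at once.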
What is the error?
Hello @gcusello , The data is already being ingested into Splunk, and if I look at events from the last 10 minutes (index="my-index" earliest=-10m@m latest=@m), the syslog messages from ALL machines are showing up with a single event timestamp.  So, I need to compare the sync-time field, which is in epoch time, rather than the _time value assigned by Splunk.   Thank you for your assistance.
Hi @richgalloway , thank you, it worked. I have one more question: is there any way I can restrict events in Splunk? For example, from the above query, if I get 10 of the same logs in 1 hour, how can I write a query to fetch only 5 records per hour?
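In SPL this kind of cap is usually built with something like `bin` plus `streamstats` (a running count per hour bucket, then `where count<=5`). The same keep-at-most-N-per-hour logic can be sketched in plain Python — the function name and the (epoch_seconds, payload) event shape are invented for illustration:

```python
from collections import defaultdict

def cap_per_hour(events, limit=5):
    """Keep at most `limit` events per clock hour.
    `events` is an iterable of (epoch_seconds, payload) pairs."""
    counts = defaultdict(int)
    kept = []
    for ts, payload in events:
        bucket = ts // 3600  # hour bucket, like `bin _time span=1h`
        counts[bucket] += 1
        if counts[bucket] <= limit:
            kept.append((ts, payload))
    return kept
```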
Sorry, typo in field names. index=mssql sourcetype=SQL_Query source=Sales_Contracts_Activations* OR source=Sales_Contracts_Activations_BOM OR (source=Esigns CALLBACK_STATUS="SUCCESS" STATUS=Complete) | eval query_source=if(source="Esigns", "query2", "query1") | stats count(eval(query_source="query1")) as count1 count(eval(query_source="query2")) as count2 | eval diff=count1-count2
Hi @sarge338 , let me understand: you have three syslog sources to ingest in Splunk, and then you would compare events from the three sources, is it correct? If this is your requirement, you should follow these preliminary steps to ingest data (if you already ingested data, skip these steps): identify the data type (technology, model, type of data); identify the IP address, protocol and port of each source; identify the correct Add-on to parse each data source; put your heavy forwarder in listening mode on the defined ports and protocols; configure your sources to send logs to the heavy forwarder using the defined protocol and port; configure the input on the heavy forwarder, assigning the correct sourcetype (based on the chosen Add-on) and the correct index; the host is automatically assigned using the IP address. Then in Splunk you can run a search like the following (not having any information on the data sources, I cannot be detailed and I could be vague): index=your_index host IN (M1, M2, M3) | stats dc(host) AS host_count BY _time | where host_count=3 if the timestamps must be exactly the same; if instead they must be similar (e.g. 5-minute ranges), you could run: index=your_index host IN (M1, M2, M3) | bin span=5m _time | stats dc(host) AS host_count BY _time | where host_count=3 In this way you have the events with the same timestamp in all the hosts; if you want a different condition, modify the final where command. Ciao. Giuseppe  
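The bucketing logic of the second search can be sketched outside Splunk as a minimal Python illustration, assuming events arrive as (epoch_seconds, host) pairs (the helper name is invented):

```python
from collections import defaultdict

def fully_synced_buckets(events, span=300, expected_hosts=3):
    """events: iterable of (epoch_seconds, host) pairs.
    Returns the bucket start times in which all expected hosts reported,
    mirroring `bin span=5m _time | stats dc(host) ... | where host_count=3`."""
    buckets = defaultdict(set)
    for ts, host in events:
        buckets[ts - ts % span].add(host)  # snap the timestamp to its 5-minute bucket
    return {t for t, hosts in buckets.items() if len(hosts) == expected_hosts}
```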
Good Morning! I rarely get to dabble in SPL, and as such, some (probably simple) things stump me.  That is what brought me here today. I have a scenario in which I need to pull SYSLOG events from a series of machines that all report the same field names.  One of those machines is the authoritative source of the values, which all of the other systems should have.  As an example, I have 3 machines... M1, M2, M3, and each machine reports three field/value pairs... sync-timestamp, version-number, machine-name. I need to compare the sync-timestamp of M1 with the sync-timestamps of the other two machines.  My idea is to assign the "sync-timestamp value WHERE machine-name=M1" to a variable by which to compare the other two machines' values.  I intend to use this report to ultimately create an alert, so we know if machines are not syncing properly. I just cannot figure out the syntax to make this happen.  Can anyone provide some guidance on this? Thank you in advance!
Hi @AL3Z, if you want to remove only a part of events, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Anonymizedata you should insert in your props.conf  one SEDCMD-<class> = s/<regex>/<replacement>/g, using my above regex: SEDCMD-remove_strings = s/Data Name\=\'ParentProcessName\'\>C:\\Program Files\\(Windows Defender Advanced Threat Protection\\MsSense\.exe)|(Windows Defender Advanced Threat Protection\\SenseIR\.exe)|(AzureConnectedMachineAgent\\GCArcService\\GC\\gc_worker\.exe)|(Rapid7\\Insight Agent\\components\\insight_agent\\3\.2\.5\.31\\ir_agent\.exe)//g  Ciao. Giuseppe
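One thing worth double-checking in a SEDCMD like the one above: in PCRE-style regexes, `|` binds more loosely than concatenation, so a prefix written before the first alternative does not apply to the later ones unless the whole alternation is wrapped in a group. A small Python demonstration with invented patterns:

```python
import re

# With "Path=(foo\.exe)|(bar\.exe)" the "Path=" prefix belongs ONLY to the
# first alternative, so bare "bar.exe" matches without the prefix:
loose = re.compile(r"Path=(foo\.exe)|(bar\.exe)")
assert loose.search("bar.exe") is not None

# Wrapping the alternatives in one group makes the prefix apply to all of them:
shared = re.compile(r"Path=((foo\.exe)|(bar\.exe))")
assert shared.search("bar.exe") is None
assert shared.search("Path=bar.exe") is not None
```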
Hi @loganramirez , at first the limit of subsearch results is 50,000 not 10,000. Anyway, you could use stats to correlate searches, something like this: index="my_data" resourceId="sip*" ("CONNECTED" OR "ENDED") | eval status=if(searchmatch("CONNECTED"),"CONNECTED","ENDED") | stats dc(status) AS status_count values(status) AS status values(meta) AS meta last(timestamp) AS timestamp BY guid | where status_count=1 AND status="CONNECTED" Ciao. Giuseppe
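The correlation idea here — group by guid and keep only guids that never saw ENDED — can be sketched in plain Python (the event shape and function name are invented for illustration):

```python
from collections import defaultdict

def connected_without_ended(events):
    """events: iterable of (guid, status) pairs.
    Returns guids that saw CONNECTED but never ENDED, mirroring
    `stats dc(status) ... BY guid | where status_count=1 AND status="CONNECTED"`."""
    statuses = defaultdict(set)
    for guid, status in events:
        statuses[guid].add(status)
    return {guid for guid, seen in statuses.items() if seen == {"CONNECTED"}}
```

Because everything happens in one pass over one search, none of the subsearch or join row limits apply.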
I have the following script, but it keeps erroring out.

import splunklib.client as client

def connect_to_splunk(username, password, host='xxxxxxxx.splunkcloud.com', port='8089',
                      owner='admin', app='search', sharing='user'):
    # note: pass the bare hostname here; client.connect takes the scheme separately
    try:
        service = client.connect(username=username, password=password, host=host,
                                 port=port, owner=owner, app=app, sharing=sharing)
        if service:
            print("Splunk login successful")
            print("......................")
        return service
    except Exception as e:
        print(e)

def main():
    try:
        splunk_service = connect_to_splunk(username='xxxxxx', password='xxxxxxx')
    except Exception as e:
        print(e)

if __name__ == '__main__':
    main()

There is no error from the debugger (using Visual Studio).  Would appreciate any assistance.
Greetings. I'm trying to count all calls in this: index="my_data" resourceId="sip*" "CONNECTED" that are not in this: index="my_data" resourceId="sip*" "ENDED" This works when the latter is <10k (subsearch):   index="my_data" resourceId="sip*" "CONNECTED" NOT [ search index="my_data" resourceId="sip*" "ENDED" | table guid ]   And I can use a join when it's >10k, because the TOTAL is not 10k (join limits):   index="my_data" resourceId="sip*" "CONNECTED" | table guid meta | join type=left guid [ search index="my_data" resourceId="sip*" "ENDED" | table guid timestamp ] | search NOT timestamp="*"    But neither 'feels' great. I'm making my way through the PDF found here but haven't figured out 'the best' way to do this (if such a thing exists). https://community.splunk.com/t5/Splunk-Search/how-to-exclude-the-subsearch-result-from-main-search/m-p/572567 So while there are several questions related to 'excluding subsearch' results, I have not found many that help with this 10k issue (subsearch results more than 10k, and a join works as long as my total values are less than 10k). PLUS - joins are kinda sucky, amirite?  I mean, that's like the first thing Nick Mealy says in that pdf. So just looking for more options to try and learn! Thank you!  
Thanks @ITWhisperer  I can see the values in query1 and query2, but count1, count2, and diff are all showing as 0
Windows domain controller Server not reporting win security events in Splunkcloud We have a Windows Server acting as a Domain Controller; the Splunk forwarder is installed on this server and it forwards to our local on-premise Heavy Forwarder, which then uploads to Splunk Cloud. The Windows domain controller server in question is displaying Windows event logs for application and system, but not for security. So it is partially working, but somehow the security events are not making it to the cloud. However, it was working 100% fine before and later stopped working. Around the time it stopped working, we added it to msad (for domain controller specific inputs) but did not make any other changes. 
Another unique field to extract is the Operation performed: \:\s+[A-Z]+\s+(?<Operation>[*\s]+) You can test on a report with: | rex field=_raw "\:\s+[A-Z]+\s+(?<Operation>[*\s]+)" You can use the operation event results to narrow down the type of events you want to search. For the event message portion, you can use: (?ms)[A-Z].*\s-\s+(?<Message>.*) You can test on a report with: | rex field=_raw "(?ms)[A-Z].*\s-\s+(?<Message>.*)" (?ms) is used to match across multiple lines of a message; Java stack traces are a common example.   Thanks, Joe
We have set up cluster monitoring for a K8s cluster to monitor when pods get killed, failed, etc. The Alert we get looks like the following: We have hundreds of pods and namespaces in the cluster, and I would like the alert summary to contain the namespace and pod name; otherwise I don't know if it's something I can ignore or not. For example, if someone is in a dev namespace testing a new app, the pod alerts might go off a lot, and there is no way to know if it is a production namespace/pod without going into the app. And when I wake up in the morning with 100 email alerts for pods failing, I don't know if prod is falling over or someone set up a new namespace to test... It doesn't seem to be possible to get the pod name, even though it's available in the dashboard view of the event: All I want is the Email/HTML template to include the namespace and pod name, but that doesn't seem to be possible today.
From what I see, the vizportal_node-0.log seems to be a system/audit log containing the events needed to determine failures. The log events are not structured in a way that lets Splunk easily extract fields from the display of the event information. There are lots of results within the log file without any reference to the meaning of the result. You will need to build a lot of rex commands to extract fields correctly. For example, you can test a failed login attempt and then read the log file. If you already have the vizportal_node-0.log events forwarded to Splunk, you can run a search within Splunk for your failed login attempt using _raw=ERROR. The log level of an event seems to be unique and can be extracted to a field. I was able to save the below rex command for determining the log level of Tableau logs. (?<log_level>ERROR|INFO|EMERG|ALERT|CRIT|WARN|NOTICE|DEBUG) If you want to test the above rex command on a report, you can use: | rex field=_raw "(?<log_level>ERROR|INFO|EMERG|ALERT|CRIT|WARN|NOTICE|DEBUG)" Based on the report you are building, you may need custom rex commands to meet your needs.   Thanks, Joe
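If you want to sanity-check that log-level pattern outside Splunk, here is a quick Python sketch. The sample log line and class name are invented, and note that Python's `re` spells named groups `(?P<name>...)`, while Splunk's rex accepts `(?<name>...)`:

```python
import re

# Same alternation as the rex command, in Python's named-group spelling.
LOG_LEVEL = re.compile(r"(?P<log_level>ERROR|INFO|EMERG|ALERT|CRIT|WARN|NOTICE|DEBUG)")

# Invented sample line in the general shape of a Tableau vizportal log entry.
line = "2023-11-07 12:00:00.000 +0000 pool-2 : ERROR com.example.vizportal : Login failed"
match = LOG_LEVEL.search(line)
if match:
    print(match.group("log_level"))  # first level keyword found in the line
```

One caveat: a bare alternation like this will also match these words inside ordinary message text, so you may need to anchor it to the position where the level actually appears in your events.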