All Posts

Hi @Praz_123, all looks good so far. For the alert trigger actions, did you enable the email notification and provide your email id?
Hi @jip31, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi, I need to find all time_interval values for each machine where there is no data (no row for Name). The goal is to create an alert if there was no data in a time interval for a machine. For example, if we look at one day and machine X, and there was data in the time intervals 8:00-10:00 and 10:00-12:00, I need to return X and the rest of the intervals (12:00-1:00, 1:00-2:00, ...). I wrote the following command: | chart count(Name) over machine by time_interval. I get a table with all intervals and machines, with cell=0 where there is no data. I want to return all cells where cell=0 (I need the interval and machine where cell=0), but I didn't succeed. I also tried to save the query and do a left join, but it doesn't work. It's a very simple task; can someone help me with it? Thanks, Maayan
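A sketch of one possible approach, assuming the field names machine and time_interval from the question: untable turns the chart output back into one row per machine/interval pair, so the zero cells can be filtered directly.

```
| chart count(Name) over machine by time_interval
| untable machine time_interval count
| where count=0
```

One caveat: an interval with no events for any machine never appears as a chart column at all, so for a complete alert you may still need to append the full list of expected intervals (e.g. from a lookup) before filtering.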
Query should return last/latest available data when there is no data for the selected time range
Hi @jalbarracinklar, if you install the app downloaded from Splunk Cloud, you don't need the following add forward-server command, because the app already has all the information to connect to Splunk Cloud. One hint: when I have this kind of requirement, I prefer to have two Heavy Forwarders in my on-premise infrastructure that act as concentrators of the logs from all the on-premise systems; this way I have to open the connection to Splunk Cloud only for these two systems. Then, if you have many Universal Forwarders, use a Deployment Server to deploy apps to them; don't manage them manually. About Ubuntu, I have read reports of many issues in the Community, so be sure about the grants to run Splunk and to access the files to read. In addition, check whether you are receiving logs from the Ubuntu servers: if yes, the issue is in the monitor stanzas; if not, it is in the connection. Ciao. Giuseppe
I'm not sure what you have (you're a bit vague on the details; it's best if you show what you have in terms of data - post some (anonymized if needed) examples of your events). But if you're calling strptime() on a string without timezone information embedded in it, it's also going to be interpreted in your local timezone. So if you want to interpret a timestamp from another zone, you'd need to manually add that timezone information to the time string. But it's still best if you show us your data - we'll see what you have and what can be done with it.
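To illustrate the point above, a minimal sketch, assuming a hypothetical field time_str holding a UTC string in %Y-%m-%d %H:%M:%S format: appending an explicit offset lets %z pin the zone instead of strptime() falling back to the local one.

```
| eval epoch=strptime(time_str." +0000", "%Y-%m-%d %H:%M:%S %z")
```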
I am trying to set up custom user data to capture the user id of the user from the Ajax request payload, via both the HTTP method and the URL method, but I am unable to get either method to work, and it is not showing up in the Network tab under developer tools. Could someone help me figure out what the issue might be?
Hi @jip31, I will try to answer your questions:
Q: Is it better to use a real-time schedule or a continuous schedule?
A: Never use a real-time search: in Splunk a search takes a CPU (more than one if you also have subsearches) and releases it when it finishes, but an RT search doesn't finish, so with a few RT searches you could use up all your resources. A scheduled search is always better.
Q: Is it necessary to fill the time range (start time and end time)?
A: Yes, always, and I usually use the same span as the running frequency: e.g. if I have a search running every hour I use earliest=-60m@m and latest=@m. You can also delay your search if you know that some of your logs are indexed late; in this case, you could use earliest=-70m@m and latest=-10m@m.
Q: If an alert event exists, does it mean that this event will be created many times in the incident review dashboard?
A: Yes. You could use a grouping command (such as stats or tstats) and a threshold to group more events. In ES, you could also use the Risk Analysis action instead of the Notable action to reduce the number of Notables: this way, every occurrence of the alert doesn't create a Notable, but increments the Risk Score for that asset or identity, and the Notable is created only when the Risk Score exceeds the Risk Score threshold. You create a Notable later, but you reduce the number of Notables to analyze. This is obviously possible only for alerts that trigger frequently. Ciao. Giuseppe
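As an illustration of the delayed window and the grouping idea described above, a minimal sketch (the index, fields, and threshold here are hypothetical, not from the original thread):

```
index=security earliest=-70m@m latest=-10m@m
| stats count by src user
| where count > 10
```

The -70m@m/-10m@m window gives late-indexed events ten extra minutes to arrive, and the stats/where pair collapses repeated raw events into one result per src/user before any notable is created.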
Hello again, I created a new index on one of the heavy forwarders. How do I make this index reflect on all the other heavy forwarders?
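One common way, sketched here under the assumption that the heavy forwarders are deployment clients of a Deployment Server: put the index definition in a small app and push it to a server class containing all the HFs. The app and index names below are placeholders.

```
# deployment-apps/all_hf_indexes/local/indexes.conf on the Deployment Server
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
```

Defining the index once in a deployed app keeps every heavy forwarder's configuration identical instead of editing each one by hand.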
Thank you for your answer. I have set the timezone to CET in my Splunk settings, but the time I get from the database is UTC. How do I have to format the time so that it is shown in my timezone? Best regards, Dana
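A minimal sketch of one way to do this, assuming a hypothetical field db_time holding a UTC string like 2024-01-15 10:30:00: parse it with an explicit +0000 offset so strptime() knows it is UTC, then let strftime() render the resulting epoch in the user's configured timezone (CET in this case).

```
| eval epoch=strptime(db_time." +0000", "%Y-%m-%d %H:%M:%S %z")
| eval local_time=strftime(epoch, "%Y-%m-%d %H:%M:%S %Z")
```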
Later we have used an app named decrypt2 and it worked for us with this syntax:   | decrypt field=randomfield atob emit('randomfielddecrypt')
Check the logs for connectivity issues. 
This worked for me too
@inventsekar Search query: index="abc" sourcetype="abcd_logs" host="AbdP" Error OR Exception OR iisnode OR ERROR. The alert trigger conditions: is greater than 0, Expires = 24 hours, Cron: */5 * * * *, Time Range: Last 30 minutes. PS: I can't share the screenshot here.
Hi, I have to create correlation searches in Splunk ES. My cron schedule will be */60 * * * *. Is it better to use a real-time schedule or a continuous schedule? Is it necessary to fill the time range (start time and end time)? Last question: if an alert event exists, does it mean that this event will be created many times in the incident review dashboard? I need to create just one incident for the same alert. How do I do this? Thanks in advance
Nice example @dtburrows3! You can always get more than the 1500 in the example by using stats values to avoid the stats list limit of 100 and prefixing the rand with the row index #, e.g. start the search with
| makeresults count=1500
| streamstats c
| eval low=1, high=100, rand=printf("%05d:%d", c, round(((random()%'high')/'high')*('high'-'low')+'low'))
| stats values(rand) as nums
| eval cnt=0
| rex field=nums max_match=0 "(?<x>[^:]*):(?<nums>\d+)"
| fields - x c
You can't replace the lat/long, but you can add country to the log_subtype, i.e. | iplocation client_ip | eval type=log_subtype." (".Country.")" | geostats count by type
@inventsekar The page below is likewise broken. I tried the URL you suggested. That indicates that the problem is with all the other folders where the macro is resident, right?
Hi All, I have created a detector that monitors a Splunk environment. I am trying to customize the alert message and pass {{dimensions.AWSUniqueId}} into it. When the alert notification is sent, this variable is empty. Can anyone please let me know why this is happening? Regards, PNV
Hi All, I recently installed/configured the "Microsoft Teams Add-on for Splunk" to ingest call logs and meeting info from Microsoft Teams. I have run into an issue I was hoping someone could help me with.
[What I would like to do] Ingest call logs and meeting info from Microsoft Teams via the "Microsoft Teams Add-on for Splunk".
[What I did] I followed the instructions and configured the "Subscription", "User Reports", "Call Reports" and "Webhook". Instructions: https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-teams-data.html
[Issue] "User Reports" and "Webhooks" have worked, but "Subscription" and "Call Reports" have not. As a result, Teams logs are not ingested. I have granted all of the required permissions in Teams/Azure based on the instructions.
[Error logs] I checked the internal logs and detected many error logs, but reading the errors did not reveal a clear cause. Among the logged problems were the following:
From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA_MS_Teams#configs/conf-ta_ms_teams_settings, user=proxy.
message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions
message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions
[Environment] Add-On Version: 1.1.3. Splunk Enterprise Version: 9.1.2. The Add-On is installed on a Splunk Enterprise instance.
Is the error in the error log the reason the call logs and subscriptions are not working properly? Or does the webhook URL have to be https to work properly? If anyone knows the reason, let me know. Any help would be greatly appreciated. Thanks,
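Not an answer, but as a starting point for narrowing this down, a search along these lines (the exact source names are an assumption, not taken from the add-on's documentation) should surface the add-on's own errors around the failing subscription calls in one place:

```
index=_internal log_level=ERROR (TA_MS_Teams OR teams_subscription)
| stats count latest(_raw) by source
```

Grouping by source separates the CredentialNotExistException from the Graph 400 responses, which helps show whether they are one problem or two.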