All Posts


Essentially, data is returned from the selected time range; if there is no data, what time range do you want to use? You could do something like this

<your search>
| appendpipe
    [| stats count as _count
     | where _count = 0
     | where isnull(_count)
     | append
        [| search <your index>
            [| metasearch index=<your index> earliest=0
             | head 1
             | rename _time as earliest
             | fields earliest] ] ]
Hi @maayan, have you explored the Alert Manager App (https://splunkbase.splunk.com/app/2665)? Try it; I usually use it when I cannot use ES. Pay attention to one point: the app can only see alerts with Global sharing. Ciao. Giuseppe
Hello, I have seen a few of the spath topics around, but wasn't able to understand enough to make it work for my data. I would like to create a line chart using pointlist values - each entry contains a timestamp in epoch and a CPU%.

The search I tried, which is not extracting this data as expected:

index="splunk_test" source="test.json" | spath output=pointlist path=series{}.pointlist{}{} | mvexpand pointlist | table pointlist

Please see the sample JSON below.

{"status": "ok", "res_type": "time_series", "resp_version": 1, "query": "system.cpu.idle{*}", "from_date": 1698796800000, "to_date": 1701388799000, "series": [{"unit": [{"family": "percentage", "id": 17, "name": "percent", "short_name": "%", "plural": "percent", "scale_factor": 1.0}, null], "query_index": 0, "aggr": null, "metric": "system.cpu.idle", "tag_set": [], "expression": "system.cpu.idle{*}", "scope": "*", "interval": 14400, "length": 180, "start": 1698796800000, "end": 1701388799000, "pointlist": [[1698796800000.0, 67.48220718526889], [1698811200000.0, 67.15981521730248], [1698825600000.0, 67.07217666403122], [1698840000000.0, 64.72434584884627], [1698854400000.0, 64.0411289094932], [1698868800000.0, 64.17585938553243], [1698883200000.0, 64.044969119166], [1698897600000.0, 63.448143595246194], [1698912000000.0, 63.80226399404451], [1698926400000.0, 63.93216493520908], [1698940800000.0, 63.983679174088145], [1701331200000.0, 63.3783379315815], [1701345600000.0, 63.45321248782884], [1701360000000.0, 63.452383398041064], [1701374400000.0, 63.46314971048991]], "display_name": "system.cpu.idle", "attributes": {}}], "values": [], "times": [], "message": "", "group_by": []}

Can you please help with how I can achieve this? Thank you. Regards, Madhav
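One possible approach, as an untested sketch: extract each [timestamp, value] pair as a single multivalue element (one brace level, not two), expand, then split the pair apart. It assumes each element of series{}.pointlist{} comes back as a string like "[1698796800000.0, 67.48]":

```
index="splunk_test" source="test.json"
| spath output=points path=series{}.pointlist{}
| mvexpand points
| eval points = trim(points, "[] ")
| eval _time = tonumber(mvindex(split(points, ","), 0)) / 1000
| eval cpu_idle = tonumber(mvindex(split(points, ","), 1))
| table _time cpu_idle
```

The division by 1000 converts the epoch milliseconds in the sample to the epoch seconds Splunk expects in _time; a line chart can then be built over _time and cpu_idle.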
| timechart span=2h count(Name) by machine
Hi, I need to find a way to present all alerts in a dashboard (Classic/Studio). Users don't want to get an email for each alert; they prefer to see all the alerts on one page (maybe in a table), together with each alert's last result, and maybe to click on an alert and get the last search. Is it possible to create an alerts dashboard? Thanks, Maayan
Hi @Praz_123, all looks good so far. In the alert trigger actions, did you enable the email notification and provide your email id?
Hi @jip31, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi, I need to find all time intervals for each machine where there is no data (no row for Name). (The goal is to create an alert if there was no data in a time interval for a machine.) For example, if we look at one day and machine X: if there was data in the time intervals 8:00-10:00 and 10:00-12:00, I need to return X and the rest of the intervals (12:00-1:00, 1:00-2:00, ...). I wrote the following command:

| chart count(Name) over machine by time_interval

I get a table with all intervals and machines, where cell=0 if there is no data. I want to return all cells = 0 (I need the interval and machine where cell=0), but I didn't succeed. I also tried to save the query and do a left join, but it doesn't work. It's a very simple mission; can someone help me with that? Thanks, Maayan
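One sketch of the idea (not tested against your data): timechart pads empty buckets with zero for every machine that reported at least once in the range, and untable turns the matrix back into rows so the zero cells can be filtered:

```
| timechart span=2h count(Name) as events by machine
| untable _time machine events
| where events = 0
```

Caveat: a machine that never reports at all in the selected range produces no column, so it will not show up this way; a lookup of all expected machines would be needed to catch those.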
Query should return last/latest available data when there is no data for the selected time range
Hi @jalbarracinklar, if you install the app downloaded from Splunk Cloud, you don't need the following add forward-server command, because the app already has all the information to connect to Splunk Cloud.

One hint: when I have this kind of requirement, I prefer to have two Heavy Forwarders in my on-premise infrastructure that act as concentrators of the logs from all the on-premise systems; this way I have to open the connection to Splunk Cloud only for these two systems. Then, if you have many Universal Forwarders, use a Deployment Server to deploy apps to them; don't manage them manually.

About Ubuntu, I have read reports of many issues in the Community; be sure about the grants to run Splunk and to access the files to read. In addition, check whether you are receiving logs from the Ubuntu servers: if yes, the issue is in the monitor stanzas; if not, it is in the connection. Ciao. Giuseppe
I'm not sure what you have (you're a bit vague on the details; it's best if you show what you have in terms of data - post some (anonymized if needed) examples of your events). But if you're trying strptime() on a string without timezone information embedded in it, it's also going to be interpreted in your local timezone. So if you want to interpret a timestamp from another zone, you'd need to manually add that timezone information to the time string. But it's still best if you show us your data - we'll see what you have and what can be done with it.
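A sketch of that manual-timezone trick (the field name time_str and the format string are assumptions, since the actual events haven't been posted):

```
| eval event_time = strptime(time_str . " +0000", "%Y-%m-%d %H:%M:%S %z")
| eval shown = strftime(event_time, "%Y-%m-%d %H:%M:%S %Z")
```

Appending " +0000" tells strptime the string is UTC; strftime then renders the resulting epoch value in the search head's configured timezone.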
I am trying to set up custom user data to capture the user id of the user from the Ajax request payload, via both the HTTP method and the URL method, but I am unable to get it working with either method and it is not showing up in the Network tab under developer tools. Could someone help me with what the issue could be?
Hi @jip31, I'll try to answer your questions:

Q: Is it better to use a real-time schedule or a continuous schedule?
A: Never use a real-time search: in Splunk a search takes a CPU (more than one if you also have subsearches) and releases it when it finishes, but an RT search doesn't finish, so with a few RT searches you could use up all your resources; a scheduled search is always better.

Q: Is it necessary to fill in the time range (start time and end time)?
A: Yes, always, and I usually match it to the running frequency: e.g. if I have a search running every hour I use earliest=-60m@m and latest=@m. You can also delay your search if you know that some logs are indexed late; in this case, you could use earliest=-70m@m and latest=-10m@m.

Q: If an alert event exists, does it mean that this event will be created many times in the incident review dashboard?
A: Yes. You could use a grouping command (such as stats or tstats) and a threshold to group more events. In ES, you could also use the Risk Analysis action instead of the Notable action to reduce the number of Notables: this way, each occurrence of the alert doesn't create a Notable but increments the Risk Score for that asset or identity, and the Notable is created only when the Risk Score exceeds the Risk Score threshold. You create a Notable later, but you reduce the number of Notables to analyze. This is obviously possible only for alerts that trigger frequently.

Ciao. Giuseppe
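As a concrete illustration of the delayed earliest/latest pattern above, an hourly scheduled search might look like this (the index, sourcetype, field names, and threshold are made-up placeholders, not from the original thread):

```
index=security sourcetype=auth action=failure earliest=-70m@m latest=-10m@m
| stats count BY user, src
| where count > 5
```

Shifting both bounds back by 10 minutes leaves room for late-indexed events, while the snapped one-hour window keeps consecutive runs from overlapping or leaving gaps.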
Hello again, I created a new index on one of the heavy forwarders; how do I make this index reflect on all the other heavy forwarders?
Thank you for your answer. I have set the timezone to CET in my Splunk settings, but the time I get from the database is UTC. How do I have to format the time so that it is shown in my timezone? Best regards, Dana
Later we used an app named decrypt2 and it worked for us with this syntax:   | decrypt field=randomfield atob emit('randomfielddecrypt')
Check the logs for connectivity issues. 
This worked for me too
@inventsekar   Search query: index="abc" sourcetype="abcd_logs" host="AbdP" Error OR Exception OR iisnode OR ERROR. The alert trigger condition is "greater than 0", Expires = 24 hours, Cron: */5 * * * *, Time Range: Last 30 minutes. PS: I can't share the screenshot here.
Hi, I have to create correlation searches in Splunk ES. My cron schedule will be */60 * * * *. Is it better to use a real-time schedule or a continuous schedule? Is it necessary to fill in the time range (start time and end time)? Last question: if an alert event exists, does it mean that this event will be created many times in the incident review dashboard? I need to create just one incident for the same alert. How do I do this? Thanks in advance