All Posts



Thanks @inventsekar, I don't have, or want, access to the Splunk system or user files; I only have access through the Web UI. The Splunk documentation at https://docs.splunk.com/Documentation/Splunk/9.2.0/Alert/CronExpressions actually states: "You can customize alert scheduling using a time range and cron expression. The Splunk cron analyzer defaults to the timezone where the search head is configured. This can be verified or changed by going to Settings > Searches, reports, and alerts > Scheduled time." But nowhere below do they explain how to change the timezone for the cron schedule. And when I go to my alert and choose "Advanced Edit", I get a huge page with ~450 fields, but the time zone is nowhere among them. There are the fields cron_schedule (and next_scheduled_time), but again, no way to change the time zone for the schedule. So I conclude that it's simply not possible.
Thanks. I want to append the IP to the existing lookup test_MID_IP.csv.
Hi @SaintNick, let us know if we can help you more, or please accept one answer for the benefit of the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the Contributors.
Start with this. Adjust the values as necessary. Have the alert trigger when the number of results is not zero.

index=<<index where your perfmon data is stored>> source=disk
| where storage_free_percent < <<your desired value>>
The missing information can be the result of one or several missing/wrong configurations on either the MC or the IDXs; that will depend on the architecture you have, so it's important to frame your case. That dashboard displays "Search is waiting for input" because there are probably missing tokens, like the Volume dropdown, right? As an example, my system doesn't have any volumes defined, so the "Volume" dropdown will not populate, preventing the dashboard from running searches and thus showing Undefined. Did you set up the Monitoring Console on that Management Node? Can it "see" all the Indexers in Settings > Distributed Search? Is the info about individual indexes accurate? If you go to MC > Settings > General Setup, do all instances show the correct information? https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/Configureindistributedmode
Hi @P_vandereerden, yes, as per the log pattern there are distinct transaction IDs with the ORA-00001 error message. The requirement is to identify all such transactions with the error message. Please suggest.
Hi All, I want to extract email from a JSON event in Splunk. The query I am using is:

index=*sec sourcetype=test
| eval tags_json=spath(_raw, "Tag{}"), final_tag_json=json_object()
| foreach mode=multivalue tags_json [ | eval final_tag_json=json_set(final_tag_json, spath('<<ITEM>>', "Key"), spath('<<ITEM>>', "Value"))]
| spath input=final_tag_json
| rex field=Email "(?<email>^\w+@abc.com$)"

Raw data:

"Tag": [{"Key": "app", "Value": "test"_value}, {"Key": "key1", "Value": "value1"}, {"Key": "key2", "Value": "value2"}, {"Key": "email", "Value": "test@abc.com}],

I want email to be mapped to contact when indexed. How can I achieve this? Please help me. Regards, pnv
Thank you. I have succeeded with the following: searching and displaying the JSON file (I edited inputs.conf in the Snort 3 JSON alert app directory), and searching and displaying the alert full and alert fast files (I edited inputs.conf in the apps directory). But it does not search with sourcetype="snort_alert_full", because in this file it changes the sourcetypes "snort_alert_full" and "snort_alert_fast" to "snort". Thank you for the help.
Hi @RahulMisra1, the outputlookup command is used to write the lookup file (we can overwrite or append the lookup file). Please note: this one overwrites the lookup file. If you want to append, please let us know.

index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| eval match=if('IP'== test_IP, "yes", "no")
| search match=no
| stats count by IP
| outputlookup test_MID_IP.csv
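Since the follow-up in this thread asks for appending rather than overwriting, here is a minimal sketch using outputlookup's append=true option (the index, lookup, and field names test_IP/IP are taken from this thread; the rename and fields steps are assumptions so the new rows match the lookup's column, and you may want to dedup afterwards to avoid duplicate rows):

```
index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| eval match=if('IP'== test_IP, "yes", "no")
| search match=no
| stats count by IP
| rename IP as test_IP
| fields test_IP
| outputlookup append=true test_MID_IP.csv
```

With append=true, outputlookup adds the result rows to the existing lookup file instead of replacing its contents.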
Could you please share the alert script or command?
Yes, it is possible.
1) Install a Universal Forwarder (UF) on the Windows server.
2) Enable the [perfmon://LogicalDisk] input on the UF. Restart the UF for the change to take effect.
3) Create an alert that triggers at the desired value of the % Free Space field.
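For step 2, a minimal inputs.conf stanza sketch for the UF (the counter list, instances, and interval are illustrative assumptions; adjust to your environment):

```
[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space; Free Megabytes
instances = *
interval = 60
disabled = 0
```

The % Free Space counter collected here is the field the alert in step 3 would test against your threshold.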
Hi @SaintNick ...Stack Exchange gave this one: https://unix.stackexchange.com/questions/710815/how-do-i-make-cron-use-utc If you are using Windows, or if the above idea didn't work and you are looking for a simple shortcut, simply convert the time to UTC manually and update the cron expression accordingly.
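If you do control an OS-level crontab (per the linked Stack Exchange thread), a sketch of that approach, assuming a cron implementation such as cronie that honors the CRON_TZ variable — note this applies to system cron, not to Splunk's own scheduler:

```
# crontab entry: run at 06:00 UTC regardless of the system timezone
CRON_TZ=UTC
0 6 * * * /path/to/script.sh
```

The manual-conversion shortcut is equivalent: if the job should run at 06:00 UTC and the search head is at UTC-5, schedule the Splunk cron expression for 01:00 local time.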
Great! Thanks for your help. I had checked the article, but each bucket consists of raw data and tsidx files only. I am asking: after the raw data is parsed and normalized, should it be stored somewhere in the parsed form?
Calling out to any Splunk engineer or moderator to answer this simple question!
Thanks Giuseppe, that's exactly what I want to know: how to tell the cron schedule to run in UTC.
No worries, glad it worked out.
I guess you can have the same automatic lookup attribute names inside the same app, which then point to the lookup files being used, but it causes issues when the same name is used inside another app (I know Splunk sends a message for saved searches with the same or duplicate name, but I don't think it does for lookups). So, something like this alert may help:

| rest splunk_server=local servicesNS/admin/search/data/props/lookups
| search attribute=LOOKUP-*
| stats count by attribute
```Filter or add ones that are OK, as there may be other attributes that use similar lookups in the same app context```
```| search NOT attribute="LOOKUP-my_ok_lookup1" NOT attribute="LOOKUP-my_ok_lookup2"```
| eval duplicate=if(count > 1, "Yes", "No")
| where count > 1

You can then explore whether there are other apps that use the same attribute name. Example, in your case, eventcode:

| rest splunk_server=local servicesNS/admin/search/data/props/lookups
| search attribute=LOOKUP-eventcode

Have a play and see if this helps.
How can we configure a disk space alert using Splunk? Is it possible?
How do I update test_MID_IP.csv with the output IPs, so that the next time it runs it uses the updated list?

index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP as IP OUTPUT test_IP
| eval match=if('IP'== test_IP, "yes", "no")
| search match=no
| stats count by IP
I had defined the complete path in inputs.conf and restarted the Splunk forwarder, but got an error in the splunkd logs. Kindly refer to the attachment.