All Posts

Hello everyone, we use LDAPS in Splunk to allow our employees to log in to the system (Search Heads). Is there a way for users to change their passwords when needed or after they have expired? Some users only access Splunk and do not have any other means to update their passwords.
| appendpipe [| stats min(_time) as _time | eval event="min"]
Hi at all, I have to configure a multisite Indexer Cluster and I have a doubt: in the Splunk architecting course, the indicated Indexer Cluster replication port was 9100. Then, reading the Multisite Indexer Cluster documentation, the indicated port is 9887. Which is the correct one? Can I use 9100 instead of 9887, or is 9100 dedicated to other purposes? Thank you for your support. Ciao. Giuseppe
@Ragamonster you will need to use REST to find the task you want to add the note to, and then POST the note to that task: https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlatformAPI/RESTNotes

You can do this using the HTTP app, but I prefer using the Session API, as it's pre-authenticated and gives you a lot more control: https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlaybookAPI/SessionAPI

-- Hope this helps. If so, please mark as a solution for future readers. Happy SOARing! --
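To make the shape of that POST concrete, here is a minimal Python sketch of the note body. The field names (note_type, task_id, container_id) reflect my reading of the RESTNotes docs and should be verified against your SOAR version; the IDs and titles below are made up:

```python
import json

def build_task_note(container_id, task_id, title, content):
    """Build the JSON body for a task note POST.

    Field names follow the RESTNotes documentation as I understand it;
    confirm them against your SOAR version before relying on this.
    """
    return {
        "note_type": "task",        # attach the note to a task, not a container
        "container_id": container_id,
        "task_id": task_id,
        "title": title,
        "content": content,
    }

# Hypothetical IDs for illustration only
payload = build_task_note(42, 7, "Evidence", "Added IOC details from triage.")
print(json.dumps(payload))
```

In a playbook you would send this body via the pre-authenticated session rather than hand-rolling headers, which is why the Session API is the nicer route.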
Hi, I am having this same issue at the moment, as the domain I manage is completely air-gapped from the internet, so there is no cloud connectivity. After some digging I have read that there are events in the Event Viewer under Applications and Services Logs > Microsoft > Windows > Windows Defender > Operational:

1116 - MALWAREPROTECTION_STATE_MALWARE_DETECTED
1117 - MALWAREPROTECTION_STATE_MALWARE_ACTION_TAKEN
1118 - MALWAREPROTECTION_STATE_MALWARE_ACTION_FAILED
1119 - MALWAREPROTECTION_STATE_MALWARE_ACTION_CRITICALLY_FAILED

I haven't tested them yet, as I literally just found them online this minute and came across this message board at the same time. I hope this helps, and if you have found anything extra, can you put it in here too? I'm going to set up the forwarder now to collect these and create a dashboard.

KR Richard
Try something like this on the original (unedited) field | rex field=MSADChangedAttributes max_match=0 "(?m)(?<Changed>^[^-]*$)"
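For anyone who wants to see why that pattern works, here is the same multiline idea in Python's re module, against a made-up sample of the field (removed attribute values prefixed with "-"): in multiline mode, `^[^-]*$` only matches whole lines that contain no dash.

```python
import re

# Made-up sample of an MSADChangedAttributes-style value: changed attribute
# names on their own lines, old values prefixed with "-".
ms_ad_changed = "Display Name\n-Old Display Name\nDepartment\n-Old Department"

# (?m) turns on multiline mode so ^ and $ anchor at each line;
# [^-]* only matches lines with no "-" anywhere in them.
changed = re.findall(r"(?m)^[^-]*$", ms_ad_changed)
print(changed)  # ['Display Name', 'Department']
```

The SPL rex with max_match=0 behaves the same way, collecting every matching line into the multivalue field Changed.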
If anybody is still facing this issue and could not figure out the solution: in my case, I had to change the view type. There are three options to choose from: Raw, List, and Table. If you want JSON syntax highlighting by default, you should choose the List view.
Hello ttovarzoll, Thank you for providing your solution. Unfortunately it doesn't work in all cases, as shown in the following screenshots where 'User Account Control' is filled. I can imagine that this is also the case for other fields. Did you come across this issue, and do you perhaps have a solution for it? Kind regards, Jos
The only way volunteers can help you concretely is for you to post sample or mock data (anonymized as needed) in text, illustrate the desired results (in text), then explain the logic connecting the illustrated data to the results.  Forget Splunk for a moment.  What would you look for in the data you illustrate to determine status by PID?  What does "status of process based on PID" even mean? Do you mean listing the status of each process grouped by PID? (Splunk and many data query languages call this group-by.)
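As a concrete illustration of what "group-by" means here, a minimal Python sketch with made-up process rows:

```python
from collections import defaultdict

# Made-up process rows, just to illustrate "status of process grouped by PID"
rows = [
    {"PID": 101, "status": "running"},
    {"PID": 102, "status": "stopped"},
    {"PID": 101, "status": "running"},
]

# Group the observed statuses by PID, the way a stats ... by PID clause would
status_by_pid = defaultdict(set)
for row in rows:
    status_by_pid[row["PID"]].add(row["status"])

print(dict(status_by_pid))  # {101: {'running'}, 102: {'stopped'}}
```

In SPL the analogous operation is a `stats` aggregation with a `by PID` clause.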
Have you looked up Create a CSV lookup definition?  You can define a field as match type CIDR.  The question is extremely vague.  If you want concrete help, illustrate mock data and desired results, and explain the logic between the data and the desired results.
Hi @Praz_123, what's your question? This message says that you don't have enough disk space on the partition where your indexes are stored (by default $SPLUNK_HOME/var/lib/splunk), so indexing was stopped. To solve this issue you have to free up space (e.g. by deleting Splunk logs from $SPLUNK_HOME/var/log/splunk) or, better, add more disk space to your file system. Ciao. Giuseppe
Very smart! Thanks!
ID: rb.splunk-es.abc.com:/dev/mapper/vg_data-lv_data_opt:os_high_disk_utilization - rb.splunk-es.abc.com - High Priority - Low disk space on /data/opt at 2.00% free
I came across the option of running a custom Python script in Splunk on triggered events by adding the "run a script" alert action, but I don't know how to do it. As the alerts are visible in Splunk, I want to run a script that extracts those triggered alerts.
You cannot use wildcard group in eval.  Use foreach to iterate. | foreach test-*.traffics [eval <<FIELD>> = round('<<FIELD>>' / 1024, 2)]
Hi All, I am using the search below to monitor the status of a process based on its PID and resource usage. We tested by stopping the service and the PID changed. How can we determine when it stopped? With the search below the old PID does not appear in the table, only the latest one. How can I modify it?

index=Test1 host="testserver" (source=ps COMMAND=*cybAgent*)
| stats latest(cpu_load_percent) as "CPU %", latest(PercentMemory) as "MEM %", latest(RSZ_KB) as "Resident Memory (KB)", latest(VSZ_KB) as "Virtual Memory (KB)", latest(PID) as "PID", latest(host) as "host" by COMMAND
| eval Process_Status = case(isnotnull('CPU %') AND isnotnull('MEM %'), "Running", isnull('CPU %') AND isnull('MEM %'), "Not Running", 1=1, "Unknown")
| table host, "CPU %", "MEM %", "Resident Memory (KB)", "Virtual Memory (KB)", Process_Status, COMMAND, PID
| eval Process_Status = coalesce(Process_Status, "Unknown")
| fillnull value="N/A"
Hi @adent, Only one question: if you have events across multiple days, an event at 23.59 on the previous day is earlier than an event at 1.30 on the second day, but calculating min on the time-of-day value gives a different answer. Do you want to calculate the min only on the time of day, or the earliest timestamp?

If min on time of day, you could run something like this:

<your_search>
| eval Time=strftime(_time,"%H.%M")
| stats min(Time) AS Time BY event
| append [ search <your_search> | eval Time=strftime(_time,"%H.%M") | stats min(Time) AS Time ]

If the earliest timestamp, you could try something like this:

<your_search>
| stats earliest(_time) AS Time BY event
| append [ search <your_search> | stats earliest(_time) AS Time ]
| eval Time=strftime(Time,"%Y-%m-%d %H.%M")

I could be more detailed if you share a sample of your logs and (if you already have one) the search that you're using. Ciao. Giuseppe
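To see why the two interpretations differ, here is a small Python sketch with two made-up events spanning midnight: the 23:59 event is earlier in real time, but its "%H.%M" string sorts after the 01:30 string.

```python
from datetime import datetime, timezone

# Two made-up events: 23:59 on day one and 01:30 on day two (UTC)
t1 = datetime(2024, 5, 1, 23, 59, tzinfo=timezone.utc).timestamp()
t2 = datetime(2024, 5, 2, 1, 30, tzinfo=timezone.utc).timestamp()

# min over the "%H.%M" clock strings, as in the first search
by_clock = min(datetime.fromtimestamp(t1, timezone.utc).strftime("%H.%M"),
               datetime.fromtimestamp(t2, timezone.utc).strftime("%H.%M"))

# min over the epoch timestamps, as earliest(_time) would compute
by_epoch = min(t1, t2)

print(by_clock)         # "01.30" -- the smaller clock string
print(by_epoch == t1)   # True  -- the 23:59 event is actually earlier
```

This is exactly the ambiguity Giuseppe is asking the OP to resolve before picking a search.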
Hi, here is an old answer which describes how joins can/should be done with Splunk: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948 r. Ismo
I've had the exact same use case and found a workaround. Adding this just in case anyone else stumbles across it. Update the default option to noop, so the token reverts to it when the checkbox is deselected: <input type="checkbox" token="dedupresults"> <choice value="dedup src,dest">Dedup</choice> <default>noop</default> </input> And insert the token into your search:  .... | $dedupresults$ | ..... This will result in the search being either ... | dedup src,dest | ..... or ... | noop | .....
Using mvrange with time!  I think you also gave me this a long time ago for a different question, but with a unit instead of directly with _time. (mvexpand with info_max_time - info_min_time is too much.) Combining that lesson (thanks again!) and this formula, and working out some Splunk kinks, I can make it work with a simple count. To start, I also realize that addinfo in makeresults will not work the same way as in a search command.  So, I modified my simulation strategy a little.  This will be my new baseline:

index = _internal
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| timechart span=1h count

The complete workaround will be:

index = _internal
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| bucket _time span=1h@h
| chart count over _time
| append
    [| makeresults
    | addinfo
    | eval hours = mvrange(0, round((info_max_time - info_min_time) / 3600))
    | eval time = mvmap(hours, info_min_time + hours * 3600)
    | table time
    | mvexpand time
    | rename time as _time
    | bucket _time span=1h@h
    | eval count=0]
| stats sum(count) as count by _time

Then, I should have noted in OP that my chart has a groupby clause.
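For reference, the hourly buckets that the mvrange/mvmap subsearch generates can be sketched in plain Python (the time window below is made up; the names mirror the addinfo fields):

```python
# Hour-aligned start of a made-up 5-hour window (names mirror addinfo fields)
info_min_time = 1_700_000_000 // 3600 * 3600
info_max_time = info_min_time + 5 * 3600

# Same arithmetic as mvrange(0, round((info_max_time - info_min_time) / 3600))
hours = range(round((info_max_time - info_min_time) / 3600))

# Same arithmetic as mvmap(hours, info_min_time + hours * 3600)
buckets = [info_min_time + h * 3600 for h in hours]

print(len(buckets))  # 5 hourly buckets, each appended with count = 0
```

Each generated bucket carries count=0, so the final `stats sum(count) by _time` keeps real counts where they exist and zero where they don't.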
So, I move my baseline to:

index = _internal sourcetype IN (splunkd, splunkd_access, splunkd_ui_access)
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| timechart span=1h count by sourcetype

The workaround with groupby therefore is:

index = _internal sourcetype IN (splunkd, splunkd_access, splunkd_ui_access)
| where _time < relative_time(now(), "-2h@h") ``` simulate zero-count buckets ```
| bucket _time span=1h@h
| chart count over _time by sourcetype
| append
    [| makeresults
    | addinfo
    | eval hours = mvrange(0, round((info_max_time - info_min_time) / 3600))
    | eval time = mvmap(hours, info_min_time + hours * 3600)
    | table time
    | mvexpand time
    | rename time as _time
    | bucket _time span=1h@h
    | foreach splunkd, splunkd_access, splunkd_ui_access [eval <<FIELD>> = 0]]
| chart sum(*) as * by _time

This is super messy; it can be daunting if there are many values in the groupby, or if the values are unpredictable.  As you said, I should try to stick to timechart when dealing with time series.
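The same zero-filling idea, stripped of Splunk, can be sketched in Python with a made-up set of observed (bucket, group) counts: every missing combination becomes an explicit zero, which is what the foreach in the appended subsearch accomplishes per group column.

```python
# Made-up observed counts, keyed by (epoch bucket, group value)
observed = {(0, "splunkd"): 4, (3600, "splunkd_access"): 2}

buckets = [0, 3600]
groups = ["splunkd", "splunkd_access", "splunkd_ui_access"]

# Fill every (bucket, group) combination, defaulting missing ones to 0 --
# the Python analogue of appending zero rows and summing per bucket
filled = {(b, g): observed.get((b, g), 0) for b in buckets for g in groups}

print(filled[(0, "splunkd_ui_access")])  # 0 -- missing combos become zero
print(filled[(0, "splunkd")])            # 4 -- real counts are preserved
```

The pain point the post describes is visible here too: the group values must be known up front, which is why timechart (which zero-fills for you) is the friendlier tool.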