All Posts

Hi Rohit, are you still getting the error shown in your screenshot? I can help you with this, but the error in your screenshot is very generic; it could occur for many different reasons. Could you please share the logs from the database agent you deployed? Then we can easily find a solution to fix this problem. Thanks, Cansel

I've not used durable searches, so I am not totally sure how they work in terms of timestamp data in the index. However, have you tried including the durable_cursor in your stats, like this?

index=_internal sourcetype=scheduler earliest=-1h@h latest=now
``` Find the latest durable_cursor for this saved search ```
| eventstats max(durable_cursor) as durable_cursor by savedsearch_name
``` and include it in the stats ```
| stats latest(status) as FirstStatus max(durable_cursor) as durable_cursor by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")

However, I don't see how you can do the if test when you do not have next_scheduled_time in the _internal index data - you will need to use the REST API to get the next scheduled time. Or maybe you can make the eventstats/stats do this:

| eventstats max(durable_cursor) as durable_cursor max(eval(if(status="success", scheduled_time, null()))) as max_success_scheduled_time by savedsearch_name
| stats latest(status) as FirstStatus max(durable_cursor) as durable_cursor max(max_success_scheduled_time) as max_success_scheduled_time by scheduled_time savedsearch_name

but I am unfamiliar with durable searches, so I don't know how these timestamps work.

Hi all, getting to grips with SPL and would be forever grateful if someone could lend their brain for the below:

I've got a lookup in the format below:

(Fields) --> host, os, os version
(Values) --> Server01, Windows, Windows Server 2019

But in my case, this lookup has 3000 field values. I want to know their source values in Splunk (this lookup was generated by a match condition with another, so I KNOW that these hosts are present in my Splunk env).

I basically need a way to do the following:

| tstats values(source) where index=* host=(WHATEVER IS IN MY LOOKUP HOST FIELD) by index, host

But I can't seem to find a way. I originally tried to match with the below:

| tstats values(source) where index=* by host, index
| join type=inner host
    [| inputlookup mylookup.csv | fields host | dedup host]

But my results were too large for Splunk to handle. Please help!

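One common alternative (a sketch, assuming the lookup's host field matches the indexed host field) is to feed the lookup into the tstats filter as a subsearch, which expands into host=... OR host=... and avoids the join entirely:

| tstats values(source) as source where index=*
    [| inputlookup mylookup.csv | fields host | dedup host]
    by index, host

Because the filter is applied before tstats produces results, this usually scales far better than joining the full tstats output afterwards, and the default subsearch result limit (typically 10,000) comfortably covers a 3000-host lookup.
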
Thank you for the reply. I did a simple test on simple text event data, and | eval test=case(x=="X", a+b) does work.

My environment consists of 1 search head, 1 manager, and 3 indexers. I added another search head so that I can put Enterprise Security on it, but when I run any search I get this error. (The only reason I did index=* was to show that ALL indexes are like this; no matter what I search, this happens.) What I'm most confused about is why the bottom portion (where the search results are) is greyed out and I can't interact with it. Here are the last few lines from the search.log; if more is required, I can send more of the log. The log is just really long.

04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - sid=1712181568.6, newState=BAD_INPUT_CANCEL, message=Search auto-canceled
04-03-2024 18:00:38.937 ERROR SearchStatusEnforcer [11858 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1712181568.6 message_key= message=Search auto-canceled
04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - State changed to BAD_INPUT_CANCEL: Search auto-canceled
04-03-2024 18:00:38.945 INFO TimelineCreator [11862 phase_1] - Commit timeline at cursor=1712168952.000000
04-03-2024 18:00:38.945 WARN DispatchExecutor [11862 phase_1] - Execution status=CANCELLED: Search has been cancelled
04-03-2024 18:00:38.945 INFO ReducePhaseExecutor [11862 phase_1] - Ending phase_1
04-03-2024 18:00:38.945 INFO UserManager [11862 phase_1] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.948 INFO UserManager [11858 StatusEnforcerThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 INFO DispatchManager [11855 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1712181568.6', username='b.morin')
04-03-2024 18:00:38.950 INFO UserManager [11855 searchOrchestrator] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 ERROR ScopedAliveProcessToken [11855 searchOrchestrator] - Failed to remove alive token file='/opt/splunk/var/run/splunk/dispatch/1712181568.6/alive.token'. No such file or directory
04-03-2024 18:00:38.950 INFO SearchOrchestrator [11852 RunDispatch] - SearchOrchestrator is destructed. sid=1712181568.6, eval_only=0
04-03-2024 18:00:38.952 INFO UserManager [11861 SearchResultExecutorThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO SearchStatusEnforcer [11852 RunDispatch] - SearchStatusEnforcer is already terminated
04-03-2024 18:00:38.961 INFO UserManager [11852 RunDispatch] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO LookupDataProvider [11852 RunDispatch] - Clearing out lookup shared provider map
04-03-2024 18:00:38.962 INFO dispatchRunner [10908 MainThread] - RunDispatch is done: sid=1712181568.6, exit=0

Alternatively, without having to know the names of the fields:

| untable Name Date value
| appendpipe
    [| stats count(eval(value > 0)) as value by Name
    | eval Date="Count_Of_Rows_With_Data"]
| xyseries Name Date value

| eval Count_Of_Rows_With_Data=0
| foreach 20*
    [| eval Count_Of_Rows_With_Data=if('<<FIELD>>' > 0, Count_Of_Rows_With_Data+1, Count_Of_Rows_With_Data)]

There is likely still something wrong with the Java installation. I remember installing JDK-17 myself and it did not work at first, but then I tried another package and it worked. Where are you getting your JDK from?

There is a known issue where version 9.1.3 cannot be installed or upgraded to: https://docs.splunk.com/Documentation/Forwarder/9.1.3/Forwarder/KnownIssues

Could you try the suggested workaround? Install the UF while passing the following feature flag:

msiexec.exe /i $SPLUNK_MSI_PACKAGE USE_LOCAL_SYSTEM=1

While you could do an explicit exclusion as @marnall already showed, it's probably not the most effective solution. Remember that, as a rule, inclusion is better than exclusion. So the question is whether the events you want to exclude differ significantly from those you want to include. (Of course, the best thing would be if you could differentiate them by an indexed field.)

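To illustrate the difference (a sketch with hypothetical index and search terms), instead of carving out the noise:

index=network sourcetype=cisco NOT "Authentication failure for illegal user"

it is generally cheaper to state what you actually want:

index=network sourcetype=cisco "error" "dcos_sshd"

Positive terms narrow the candidate event set up front, whereas an exclusion still has to consider every event the rest of the search matches.
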
While not the most computationally efficient, you could use a negating keyword search for the string you would like to exclude:

<yourSPL> NOT "PAM: Authentication failure for illegal user djras123 from"

Or have it on a separate search line, if your SPL does not end on a "search" command:

<yourSPL> | search NOT "PAM: Authentication failure for illegal user djras123 from"

I'm not very strong on log4j, but I'd expect the HEC REST endpoint to be included in the URL.

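For reference (a sketch with placeholder host and token), the HEC event endpoint typically has this shape, with the token sent in the Authorization header:

https://<splunk-host>:8088/services/collector/event
Authorization: Splunk <hec-token>
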
I am trying to exclude this from a search. They are almost all the same; just the sshd instance changes. Can someone help me exclude them (besides excluding each one individually)?

ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[17284]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[29461]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[4064]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[9450]

Thanks, guys.

It does have this effect, but it works a bit differently. With the octet-counted option, rsyslog splits the input connection (it works with TCP input only) based on the length of the event, which should be given at the beginning of the event, if I remember correctly. So the main problem is not that the newlines are encoded as #012 but that the events are not split at newline characters as they should be. If you turn off the octet-counted option, the incoming TCP stream is broken into separate events on newline characters, so there is nothing left to encode as #012.

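As a sketch of that change (the port number is just an example), octet-counted framing can be switched off on an rsyslog imtcp input:

input(type="imtcp" port="514" supportOctetCountedFraming="off")

With framing off, imtcp falls back to splitting the incoming TCP stream on newline characters.
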
I've tried using HTML tags like <p> or <b>test</b> and it makes no difference. I'd like to build a much more complete summary of the event that's more thorough, human-readable, and better formatted. Is there a way to do this?

I'm currently running Splunk Enterprise 9.1.3 and Splunk DB Connect 3.16. When logging into Splunk, I receive an error message in DB Connect that states, "Can not communicate with task server, check your settings." I made sure the correct path is set in dbx_settings.conf, in customized.java.path as well. Any suggestions would help.

Splunk Universal Forwarder upgrade to 9.1.3 is failing with the copy error "Setup can not copy the file SplunkMonitorNoHandleDrv.sys". Attached is the error message.

Thanks @danspav, this worked, although I had to set <default> to an empty string so it doesn't trigger the second condition on the initial page load: <default></default> Thanks much.

Try changing your user preferences to show times explicitly in UTC. If the cron time changes to "10 18 * * *", then the system timezone is America/New_York rather than UTC.

The Y argument can be anything valid for an eval statement. IOW, if | eval test=Y works, then | eval test=case(X, Y) should also work.

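For instance (a sketch with made-up field names), an arithmetic expression is a perfectly valid Y:

| eval test=case(status=="error", retry_count+1, status=="ok", 0)

Each Y is only evaluated when its paired X condition is true.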