
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi all, getting to grips with SPL and would be forever grateful if someone could lend their brain for the below.

I've got a lookup in the format below:

(Fields) --> host, os, os version
(Values) --> Server01, Windows, Windows Server 2019

In my case, this lookup has 3000 field values, and I want to know their source values in Splunk. (This lookup was generated by a match condition with another, so I KNOW that these hosts are present in my Splunk environment.)

I basically need a way to do the following:

| tstats values(source) where index=* host=(WHATEVER IS IN MY LOOKUP HOST FIELD) by index, host

But I can't seem to find a way. I originally tried to match with the below:

| tstats values(source) where index=* by host, index
| join type=inner host
    [| inputlookup mylookup.csv | fields host | dedup host]

But my results were too large for Splunk to handle. Please help!
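One way to express this without join is to let a subsearch generate the host filter directly inside the tstats where clause. A minimal sketch, assuming the lookup's host values exactly match the indexed host field (mylookup.csv is the file name from the question):

| tstats values(source) AS sources where index=*
    [| inputlookup mylookup.csv
     | fields host
     | dedup host
     | format]
  by index host

The subsearch expands to (host="Server01") OR (host="Server02") ... before tstats runs, so only indexed metadata for those hosts is scanned, and ~3000 hosts stays well under the default 10,000-result subsearch limit.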
Thank you for the reply. I did a simple test on plain text event data, and | eval test=case(x=="X", a+b) does work.
My environment consists of 1 search head, 1 manager, and 3 indexers. I added another search head so that I can put Enterprise Security on it, but when I run any search I get this error. (The only reason I did index=* was to show that ALL indexes are like this; no matter what I search, this happens.) What I'm most confused about is why the bottom portion (where the search results are) is greyed out and I can't interact with it. Here are the last few lines from the search.log; if more is required I can send more of the log, it's just really long.

04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - sid=1712181568.6, newState=BAD_INPUT_CANCEL, message=Search auto-canceled
04-03-2024 18:00:38.937 ERROR SearchStatusEnforcer [11858 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1712181568.6 message_key= message=Search auto-canceled
04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - State changed to BAD_INPUT_CANCEL: Search auto-canceled
04-03-2024 18:00:38.945 INFO TimelineCreator [11862 phase_1] - Commit timeline at cursor=1712168952.000000
04-03-2024 18:00:38.945 WARN DispatchExecutor [11862 phase_1] - Execution status=CANCELLED: Search has been cancelled
04-03-2024 18:00:38.945 INFO ReducePhaseExecutor [11862 phase_1] - Ending phase_1
04-03-2024 18:00:38.945 INFO UserManager [11862 phase_1] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.948 INFO UserManager [11858 StatusEnforcerThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 INFO DispatchManager [11855 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1712181568.6', username='b.morin')
04-03-2024 18:00:38.950 INFO UserManager [11855 searchOrchestrator] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 ERROR ScopedAliveProcessToken [11855 searchOrchestrator] - Failed to remove alive token file='/opt/splunk/var/run/splunk/dispatch/1712181568.6/alive.token'. No such file or directory
04-03-2024 18:00:38.950 INFO SearchOrchestrator [11852 RunDispatch] - SearchOrchestrator is destructed. sid=1712181568.6, eval_only=0
04-03-2024 18:00:38.952 INFO UserManager [11861 SearchResultExecutorThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO SearchStatusEnforcer [11852 RunDispatch] - SearchStatusEnforcer is already terminated
04-03-2024 18:00:38.961 INFO UserManager [11852 RunDispatch] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO LookupDataProvider [11852 RunDispatch] - Clearing out lookup shared provider map
04-03-2024 18:00:38.962 INFO dispatchRunner [10908 MainThread] - RunDispatch is done: sid=1712181568.6, exit=0
Alternatively, without having to know the names of the fields:

| untable Name Date value
| appendpipe
    [| stats count(eval(value > 0)) as value by Name
     | eval Date="Count_Of_Rows_With_Data"]
| xyseries Name Date value
| eval Count_Of_Rows_With_Data=0
| foreach 20*
    [| eval Count_Of_Rows_With_Data=if('<<FIELD>>' > 0, Count_Of_Rows_With_Data+1, Count_Of_Rows_With_Data)]
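If you want to try either approach above against the sample data from the question, here is a minimal harness, assuming Splunk 9.0+ for makeresults format=csv (tonumber guards against the CSV values arriving as strings):

| makeresults format=csv data="Name,2024-02-06,2024-02-08,2024-02-13,2024-02-15
Pablo,1,0,1,0
Eli,0,0,0,0
Jenna,1,0,0,0
Chad,1,0,5,0"
| eval Count_Of_Rows_With_Data=0
| foreach 20*
    [| eval Count_Of_Rows_With_Data=if(tonumber('<<FIELD>>') > 0, Count_Of_Rows_With_Data+1, Count_Of_Rows_With_Data)]

This should return 2, 0, 1, and 2 for Pablo, Eli, Jenna, and Chad respectively.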
There is likely still something wrong with the Java installation. I remember installing JDK 17 myself and it did not work, but another package did. Where are you getting your JDK from?
There is a known issue where version 9.1.3 cannot be installed or upgraded to: https://docs.splunk.com/Documentation/Forwarder/9.1.3/Forwarder/KnownIssues

Could you try the suggested workaround? Install the UF while passing the following feature flag:

msiexec.exe /i $SPLUNK_MSI_PACKAGE USE_LOCAL_SYSTEM=1
While you could do an explicit exclusion as @marnall already showed, it's probably not the most effective solution. Remember that, as a rule, inclusion is better than exclusion. So the question is whether the events you want to exclude differ significantly from those you want to keep. (Of course, the best option would be if you could differentiate them by an indexed field.) See the sketch below for the two shapes side by side.
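For illustration only — "AUTH-SUCCESS" here is a hypothetical marker; substitute whatever actually distinguishes the events you want to keep:

Exclusion (scans all matching events, then discards the noise):

index=your_index "error" NOT "PAM: Authentication failure for illegal user"

Inclusion (asks only for the events you actually want):

index=your_index "error" "AUTH-SUCCESS"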
While not the most computationally efficient, you could use a negating keyword search for the string you would like to exclude:

<yourSPL> NOT "PAM: Authentication failure for illegal user djras123 from"

Or have it on a separate search line, if your SPL does not end with a "search" command:

<yourSPL> | search NOT "PAM: Authentication failure for illegal user djras123 from"
I'm not very strong on log4j, but I'd expect the HEC REST endpoint to be included in the URL.
I am trying to exclude this from a search. The events are almost all the same; just the sshd instance changes. Besides excluding each one individually, can someone help me exclude them?

ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[17284]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[29461]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[4064]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[9450]

Thanks guys.
It does have this effect, but it works a bit differently. With the octet-counted option, rsyslog splits the input connection (it works with the tcp input only) based on the length of the event, which should be given at the beginning of the event, if I remember correctly. So the main problem is not that the newlines are encoded as #012, but that the events are not split at newline characters as they should be. If you turn off the octet-counted option, the incoming tcp stream is broken into separate events on newline characters, so there is nothing left to encode as #012.
I've tried using HTML tags like <p> or <b>test</b>, and it makes no difference. I'd like to format a much more complete summary of the event that's more thorough, human readable, and better formatted. Is there a way to do this?
I'm currently running Splunk Enterprise 9.1.3 and Splunk DB Connect 3.16. When logging into Splunk, I receive an error message in DB Connect that states, "Can not communicate with task server, check your settings." I made sure the correct path is set in dbx_settings.conf and in customized.java.path as well. Any suggestions would help.
Splunk Universal Forwarder upgrade to 9.1.3 is failing with the copy error "Setup cannot copy the file SplunkMonitorNoHandleDrv.sys". Attached is the error message.
Thanks @danspav, this worked, although I had to set <default> to an empty string so it doesn't trigger the second condition on the initial page load:

<default></default>

Thanks much.
Try changing your user preferences to show times explicitly in UTC. If the cron time changes to "10 18 * * *", then the system timestamp is America/New_York rather than UTC.
The Y argument can be anything valid in an eval statement. IOW, if | eval test=Y works, then | eval test=case(X, Y) should also work.
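A self-contained way to verify that with makeresults (the field names and values here are just for the demo):

| makeresults
| eval a=1, b=2, x="X"
| eval test=case(x=="X", a+b)

This returns test=3, confirming that an arithmetic expression works fine as a case() result.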
How do I get a count of rows that have a value greater than 0? Example below; the last column is what we are trying to generate.

Name    2024-02-06  2024-02-08  2024-02-13  2024-02-15  Count_Of_Rows_with_Data
Pablo        1           0           1           0           2
Eli          0           0           0           0           0
Jenna        1           0           0           0           1
Chad         1           0           5           0           2
Yes, I read the reply above and concur that this error occurs when the proper directory is not created; in our case it was "unknown" instead of the actual service name. Ultimately we upgraded from JDK 11 to JDK 21 and, like magic, it started working, so I imagine this was a bug in JDK 11.