All Posts
You normally need to find the events that contain the data, so these need to be logged first and then ingested into Splunk. Check whether the events below are present and search for them based on the user. Search for the event ID field - I can't remember the exact name, but it should be there. The events below may help you find the data you are looking for; for others, check Google, there is plenty out there.

EventCode=4624: Successful user logon (interactive logon).
EventCode=4625: Failed user logon attempt.
EventCode=4648: Logon using explicit credentials (e.g., "Run As" or services).
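A sketch of a search along those lines, reusing the index, source, and user from the question (all three are placeholders to adjust for your environment):

index="wineventlog" source="WinEventLog:Security" user="domainaccount" (EventCode=4624 OR EventCode=4625 OR EventCode=4648)
| stats count values(host) AS devices BY user, EventCode

The values(host) column then lists the devices each account's logon events came from.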
I'm sure you know already that the Universal Forwarder just forwards data from files, from event logs, or from scripts. Some example scenarios where you would want a heavy forwarder include:
* You are collecting logs using apps like DBConnect, Salesforce, HTTP modular input, etc. (These apps tend to be managed using the web interface, so a heavy forwarder is better.)
* You would like to perform parsing operations on data before it is indexed, e.g. you might want to send certain data to one indexer cluster and other data to another indexer cluster (see the routing sketch after this list).
* You would like to collect events using the HTTP Event Collector (HEC), but you don't want to expose the HEC interface of your indexers.
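A minimal routing sketch for that second scenario, with the sourcetype, match pattern, and group/server names all assumed for illustration:

props.conf (on the heavy forwarder):
[my_sourcetype]
TRANSFORMS-routing = route_to_clusterA

transforms.conf:
[route_to_clusterA]
# events whose raw text matches the pattern go to the clusterA output group
REGEX = ERROR
SOURCE_KEY = _raw
DEST_KEY = _TCP_ROUTING
FORMAT = clusterA_group

outputs.conf:
[tcpout:clusterA_group]
server = idxA1.example.com:9997, idxA2.example.com:9997

Events that don't match keep going to the default tcpout group, so only the matching data is diverted.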
HFs are a full Splunk instance; the UF is like an agent. We mainly use the HF if we want to ingest data via a technical add-on that uses modular inputs (Python etc.), to forward data, or to parse/mask the data before it's sent to the Splunk indexers (so these are some of the use cases for an HF; a masking sketch follows below). The UF can do some parsing for some common data formats and can also be used for forwarding, but mainly it's used to collect logs. So think about your use case. For example, do you need to collect logs from something like AWS? Then it would be better to use an HF and forward the data. (You can use the SH, but then you may need more resources.) For the UF: are you just collecting logs, or do you want to forward some data on?
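For the parse/mask case, a minimal sketch on the heavy forwarder (the sourcetype name and the SSN-style pattern are assumptions):

props.conf:
[my_sourcetype]
# rewrite anything that looks like an SSN before it reaches the indexers
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g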
Hello @richgalloway, getIndex should return the value admin_audit from the eval; the search at the end should return the content/events of the index admin_audit.
Yes, that is correct. I'm looking for a Splunk search using rex or any other built-in function that will select the event if it contains any of those spoofed English letters. Your analysis of the problem is great; thank you for the analysis and suggestion here.
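A sketch of one way to flag such events, assuming the spoofed characters are Cyrillic look-alikes (the index and field names are placeholders; widen the class to [^\x00-\x7F] to catch any non-ASCII character):

index=your_index
| regex suspect_field="[\x{0400}-\x{04FF}]"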
Hello Splunk community. I have been searching for this question quite a lot and went through many articles, but it's still unclear to me. Can someone please explain when we would want to use a heavy forwarder instead of a universal forwarder? I would really appreciate a real use case where, in order to get data into Splunk, we would want to go with a heavy forwarder instead of a universal forwarder, and why. Thanks in advance for spending time replying to my post.
I have a summary index that pulls in normalized data from 2 different sources (entirely different applications that catalog and categorize the data differently). In situations where I have events in the summary index from both sources, they are 99.99% of the time duplicates of each other; however, source 1 has better data fidelity than source 2. Let's say I weighted the high-fidelity source with a 1 and the low-fidelity source with a 2. I'm trying to find a way to filter with a by clause on another field which both events have (like device, or ip_address) - something logically like:

| where source=coalesce("sourcename1","sourcename2") by field

but where doesn't take a by clause. In the past I've done similar things by coalescing each field I want with a case statement, but in this case there are quite a few and I'm wondering if there's a more efficient way of doing it. Any ideas on the best way to accomplish this?
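One pattern that may fit, sketched with the weighting described above (the source names and the device field are placeholders): rank each source, then keep only the best-ranked event per device.

| eval priority=case(source="sourcename1", 1, source="sourcename2", 2)
| eventstats min(priority) AS best_priority BY device
| where priority=best_priority
| fields - priority, best_priority

When only the low-fidelity source has an event for a device, that event survives, so nothing is lost to the filter.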
I have a multi-select like this:

<input token="name" type="multiselect">
  <label>Name</label>
  <choice value="*">ALL</choice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>name</fieldForLabel>
  <fieldForValue>name</fieldForValue>
  <search>
    <query>index=my_index | dedup name | sort name</query>
  </search>
</input>

It correctly produces a token $name$ with value:

(name="VALUE1" OR name="VALUE2" ... )

But I have a need to make the token look like:

(name="VALUE1" OR name="VALUE1.*" OR name="VALUE2" OR name="VALUE2.*" ... )

because if "VALUE1" is selected in the multi-select, I want events that match both "VALUE1" and "VALUE1.*" (note the dot star, not just star). But I cannot just match "VALUE1*" as that will bring in events that have a different value which BEGINS with "VALUE1", which I don't want.

So the question is: how can I utilize the values TWICE in the token generation? I can't wrap my head around how I might be able to achieve this.
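One approach that may work (an untested sketch): keep the multiselect as-is and derive a second token in a <change> handler, using replace() with a backreference to emit each value twice. The token name name_expanded is an assumption; your search would reference $name_expanded$ instead of $name$:

<change>
  <!-- duplicate each name="X" clause as name="X" OR name="X.*" -->
  <eval token="name_expanded">replace($name|s$, "name=\"([^\"]+)\"", "name=\"\1\" OR name=\"\1.*\"")</eval>
</change>

The |s filter quotes the token value as a string literal so the eval expression sees it safely.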
@abhi04 Hello Abhi, Please use the below regex.  Does my answer above solve your question? If yes, spare a moment to accept the answer and vote for it. Thanks.
Hey @richgalloway, thank you for your quick response. But it's not working; I am not getting any result. Just to let you know, spIndex_name is the name of the index, and also the eval value getIndex is not returning the index name admin_audit.
Hi, I have been developing apps on Splunk SOAR for some time and I have recently encountered app errors that say "Failed to read message from connector: <app_name>" on multiple instances. This is mostly observed in cases where I am processing responses from a REST call, filtering data, and adding the dictionaries to action results. The data structure looks perfect, and compared to working actions in the same app I see no difference in the action results. Also, the action works fine when tested in the App Wizard IDE (even for a published app). When tested through a playbook or run manually in a container, I start getting this message again.

This is very strange for me, as I have been stuck on this problem for a couple of weeks and unable to solve it. I have debugged all data that is mapped to action results and summary. Also, the JSON file output datapaths are good (I have even removed all outputs from the JSON file except the default ones to see if they are the issue). I am facing this issue on two totally different apps on different instances (instance 1 running on 5.3.5 and instance 2 on 6.0).

Any help is highly appreciated. An example of a processed response from the IDE is pasted below for reference. I am using this app for interacting with an LLM. As you can see, the app runs perfectly fine; I see no data missing or any app errors here.

{"identifier": "text_prompt", "result_data": [{"data": [{"inputTextTokenCount": 4, "results": [{"tokenCount": 50, "outputText": "\nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest", "completionReason": "LENGTH"}]}], "extra_data": [], "summary": {"output_text": "\nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest", "output_tokens": 50, "input_tokens": 4}, "status": "success", "message": "Output text: \nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest, Output tokens: 50, Input tokens: 4", "parameter": {"prompt_text": "explain traffic jam", "model": "amazon.titan-text-lite-v1", "temperature": 0, "top_p": 1, "max_output_token": 50}, "context": {}}], "result_summary": {"total_objects": 1, "total_objects_successful": 1}, "status": "success", "message": "1 action succeeded", "exception_occured": false, "action_cancelled": false}
The search command doesn't handle field names on both sides of the equals sign. Use where, instead.

index=meta_info sourcetype=meta:info
| search group_name=admingr AND spIndex_name=admin_audit
| eval getIndex=spIndex_name
| where index=getIndex
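If the end goal is to actually search the content of the index named in that field, a subsearch sketch may help (assuming spIndex_name resolves to a single index name):

index=[ search index=meta_info sourcetype=meta:info group_name=admingr spIndex_name=admin_audit | head 1 | return $spIndex_name ]

The subsearch runs first, and its result (admin_audit here) is substituted into the outer search as index=admin_audit.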
I wrote a simple query to parse my Windows Event Security logs to look for a user account; however, I am looking to add onto this and find out which devices the accounts are running on.

index="wineventlog" source="WinEventLog:Security" user="domainaccount"

My end goal is to be able to type a domain account into my search and find what device it's associated with or is running as a service under.
Hello, I have a use case to get the index name from a field of one index/sourcetype and use that index name value to search the content of that index, but I am not getting any result. Here is what I did:

index=meta_info sourcetype=meta:info
| search group_name=admingr AND spIndex_name=admin_audit
| eval getIndex=spIndex_name
| search index=getIndex

Any help will be highly appreciated, thank you!
The starting format of logs in regex101    
Hi all, our regex is unable to extract host from the logs; can you please help with the correct regex? This regex works when checked in regex101, so I am not sure why it is unable to extract.

[hostextract]
REGEX = ^.*\w+\s+\d+\s+(?:\d+:){2}\d+\s+(?P<test>\w+)\s+
SOURCE_KEY = _raw
DEST_KEY = MetaData:Host
FORMAT = host::$1

e.g. log format:

May 1 08:35:30 10.98.6.249 May 1 08:35:30 host_abc
Apr 10 08:45:20 10.98.6.249 Apr 10 08:45:20 host_def
May 1 08:35:30 10.98.6.249 May 1 08:35:30 host_ghi
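One thing worth checking: a transform that writes to MetaData:Host is index-time, so it must be referenced from props.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder. A sketch, with the sourcetype name as a placeholder:

props.conf:
[your_sourcetype]
TRANSFORMS-hostextract = hostextract

Index-time settings only apply to newly indexed events, so test with fresh data after a restart.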
Can anyone help in resolving this issue? I noticed eStreamer stopped sending syslog logs via the HF to the Splunk SH (on-prem), and I tried to run this command. The Splunk version is 9.2.1.

/opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh status

Below is the message I got:

bash-4.2$ /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh status
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 27, in <module>
    from estreamer.connection import Connection
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 23, in <module>
    import ssl
  File "/opt/splunk/lib/python3.7/ssl.py", line 98, in <module>
    import _ssl # if we can't import it, let the error propagate
ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory

I upgraded the add-on from 5.1.9 to 5.2.9 and still got the same message. Is there a fix, or should I open a support case? Your suggestions are welcome.
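Before opening a case, one thing that may be worth trying (an assumption based on the traceback, not a confirmed fix): the bundled Python is failing to find Splunk's own libssl, which can happen when the script runs outside Splunk's environment. Running it through splunk cmd sets up the library path first:

# run the script with Splunk's environment (LD_LIBRARY_PATH etc.) applied
/opt/splunk/bin/splunk cmd /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh status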
Hi folks, our field parsing/extraction has broken across all sourcetypes (nginx, log4j, aws:elb, fix, custom formats as well). The most recent infra event we had was an increase of file storage over a month ago.

If our error were related to a single sourcetype, I would assume I have to review my props.conf file for the associated app and sourcetype, but in this case it appears something more systemic is occurring. As someone with limited knowledge of Splunk admin, where can I look to narrow my search to the root cause? I'm trying to RTFM, and am familiar with the "general" log structure, but not sure exactly what I'm looking for (an error/exception on restart directly calling out a props.conf file? An index-related exception? I don't know).

Would btool help me confirm if my props.conf files are correctly loading? Is there something that would indicate a failure of log parsing?

Splunk Enterprise single-instance 9.2.0.1 on an 8-core 32 GB instance.

Cheers
//A
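btool can confirm exactly what configuration Splunk loaded. A quick sketch (the sourcetype name is a placeholder):

# validate all .conf files for syntax problems
$SPLUNK_HOME/bin/splunk btool check

# show the merged props.conf as Splunk sees it, including which file each setting came from
$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug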
Hi @saad76, I’m a Community Moderator in the Splunk Community. This question was posted 9 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi all, I have created a dashboard. In that dashboard I added a text filter, and to that text filter I need to add a placeholder like below. Thanks in advance!