Hi @leykmekoo, are you sure that the field used as the search key is named exactly "Email_Address" in both searches, and that the values are compatible? If you manually take a value from the subsearch and use it in the main search, do you get results? Ciao. Giuseppe
Hi @jamaluddin-k, forwarding data from the GUI is a feature for sending logs to another Splunk instance, not to a syslog server. If you want to send logs to a syslog server, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Forwarddatatothird-partysystemsd#Syslog_data Ciao. Giuseppe
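As a starting point, a minimal sketch of the outputs.conf stanza those docs describe; the group name, host, and port below are placeholders you would replace with your own:

```
# outputs.conf on the forwarder (group name, host, and port are placeholders)
[syslog]
defaultGroup = my_syslog_group

[syslog:my_syslog_group]
server = syslog.example.com:514
type = udp
```

A restart of the forwarder is needed after the change; see the linked docs for TCP output and per-sourcetype routing.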
Hi @Ammar, let me understand: is your issue that the search doesn't find any results, or that the search finds results but no action is triggered?

In the first case, you have to debug your search. I see that you didn't use an index definition; if the index isn't in the default search path, you won't find anything:

index=your_index host=192.168.1.1 "DST=192.168.1.174"
| stats count AS Requests BY SRC
| sort -Requests
| where Requests>50

Also, are you sure your logs contain the exact string "DST=192.168.1.174"? As written, this isn't a field used in the search: if you have the field DST (which is usually lowercase!) you can use it without quotes.

In the second case, you have to check the response action configuration: which one did you configure? To have the alert listed in Triggered Alerts, or to receive an email, you have to configure those response actions explicitly; they aren't enabled by default.

Ciao. Giuseppe
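For reference, a sketch of the field-based variant mentioned above, assuming a DST field has been extracted (the index name is still a placeholder):

```
index=your_index host=192.168.1.1 DST=192.168.1.174
| stats count AS Requests BY SRC
| sort -Requests
| where Requests>50
```

Searching on the field rather than the raw string also keeps working if the log format changes spacing or ordering around DST.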
I audit Windows computers. My search returns the date, time, EventCode, and Account_Name:

Date          Time       EventCode   Account_Name
2023/08/29    16:09:30   4624        jsmith

I would like the Time field to turn red when a user signs in after hours (1800-0559). I have tried clicking the pen icon in the Time column and selecting Color, then Ranges, but I always get error messages about not putting the numbers in the correct order. What do I need to do?
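One approach that avoids the Ranges UI (which expects numeric thresholds) is a color expression in the dashboard's Simple XML. A sketch, assuming the Time column is a string formatted HH:MM:SS so lexicographic comparison works; the hex colors are arbitrary:

```
<format type="color" field="Time">
  <colorPalette type="expression">if(value >= "18:00:00" OR value &lt; "06:00:00", "#DC4E41", "#FFFFFF")</colorPalette>
</format>
```

This goes inside the table element of the dashboard source; the same effect can also be achieved by computing a flag field with eval and coloring on that.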
Essentially, I'm trying to create a checklist of the hosts I manage and which ones haven't had this event occur yet. This is the first half of what I want:

index="index" source="C:\\Windows\\System32\\LogFiles\\Log.log" "Detection!" earliest=-45m latest=now
| chart count by host

This query shows me two columns: host and the number of times "Detection!" happened. I just need a third column, or a continuation of the second, that shows hosts with a count of 0 so I know which ones I still need to work on.
Hi @samsign, As I said, you have three solutions:
1) manually modify the inputs.conf file over SSH,
2) create an empty local index on the HF that you use only for this configuration,
3) override the index assignment on the HF (for more details see https://community.splunk.com/t5/Getting-Data-In/overwrite-index-on-heavy-forwarder-based-on-port/m-p/507093).
Ciao. Giuseppe
Thank you @bowesmana for the quick response. I am writing down the exact queries here. I have to combine both queries to get the Failure % using timechart.

Query 1 (Success):
index=dl* ("Record_Inserted")
| fields msg.attribute.ticketId
| rename msg.attribute.ticketId as ticketId
| table ticketId,_time
| timechart span=1d dc(ticketId)

Query 2 (Failure):
index=dl* ("Error_MongoDB")
| fields msg.attribute.ticketId
| rename msg.attribute.ticketId as ticketId
| table ticketId,_time
| timechart span=1d dc(ticketId)
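One common pattern is to run a single search over both event types and split the counts inside timechart. A sketch, assuming the same search strings and ticketId field as the two queries above:

```
index=dl* ("Record_Inserted" OR "Error_MongoDB")
| rename msg.attribute.ticketId as ticketId
| eval outcome=if(searchmatch("Error_MongoDB"), "failure", "success")
| timechart span=1d dc(eval(if(outcome=="failure", ticketId, null()))) AS failures dc(ticketId) AS total
| eval FailurePct=round(failures / total * 100, 2)
```

The eval after timechart works on the daily columns, so FailurePct is the per-day failure percentage; if a ticket can appear as both success and failure on the same day, the definition of "total" may need adjusting.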
Hello, I've been attempting to use the results of a subsearch as input for the main search, with no luck: I'm getting no results. Based on the query below, I was expecting to get the value of the Email_Address field from the subsearch and pass it to the main search (in my mind, only the Email_Address value). The main search would then use the passed Email_Address value as search criteria to find events in another index. Is that the correct way to pass a value to be searched on, or am I wrong? If I'm wrong, how can I do this? I thank you all in advance for your assistance!

index=firstindex Email_Address [search index=secondindex user="dreamer"
     | fields Email_Address
     | head 1 ]
| table Date field1 field2 Email_Address
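For illustration, one way to make the subsearch hand back an explicit field=value clause is the return command; a sketch using the same index and field names as the query above:

```
index=firstindex
    [ search index=secondindex user="dreamer"
      | head 1
      | return Email_Address ]
| table Date field1 field2 Email_Address
```

Here the subsearch expands to Email_Address="<value>", which only matches if a field of exactly that name exists in firstindex at search time.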
A new issue should be in a new question.
Thank you for the reply. I wasn't clear enough about the hosts already being in the index. If I run this query:

index="index" source="C:\\Windows\\System32\\LogFiles\\Log.log" earliest=-45m latest=now

I have 34 hosts listed. That Log.log is used for many things and is constantly being updated. I want to know which hosts have had that Log.log updated but don't have the string "Detection!"
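Given that, one sketch is to drop "Detection!" from the base search so all 34 hosts match, then count the string per event, so hosts without it show a count of 0:

```
index="index" source="C:\\Windows\\System32\\LogFiles\\Log.log" earliest=-45m latest=now
| eval detected=if(like(_raw, "%Detection!%"), 1, 0)
| stats sum(detected) AS count BY host
| sort count
```

Hosts still needing work are the rows where count is 0; a `| where count=0` at the end would reduce the table to just those.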
We are noticing that the same data received via the HTTP Event Collector is not searchable by field like data received via our forwarders. Note how the EventName field IS NOT being picked up from an event received through HEC, while EventName IS getting picked up from an event received through the forwarders. It seems that events received through the HEC are treated as one large blob of data and are not parsed or indexed the same way by Splunk. Is there anything that can be done in the request to the HEC, or on an indexer, to resolve this? Thanks.
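A frequent cause of this is the sourcetype attached to the HEC token lacking JSON field extraction at search time. A sketch of the props.conf fix, where the sourcetype name is an assumption to be replaced with whatever the token actually uses:

```
# props.conf on the search head / indexer (sourcetype name is an assumption)
[my_hec_json]
KV_MODE = json
```

It is also worth checking whether the sender posts to /services/collector/raw instead of /services/collector/event: the raw endpoint delivers the payload as an unstructured blob, while the event endpoint accepts a JSON envelope whose "event" object can be extracted as fields.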
The ticket I had open was closed. You can generally work around the issue: if you can get the proper URL for what you need to do, you can enter it directly. The issue is known and being repaired, but there is no notice of a resolution yet.
I have a case open on it, but all they are doing is suggesting clearing client caches and refreshing the server by issuing: http://<host>:<mport>/debug/refresh None of that works at all. I've tried different clients, different browsers, incognito mode, etc. Nothing attempted makes any difference. I really hope that someone figured it out and fixed it in the 9.1.1 code.
I see they just released 9.1.1 today. Can anyone confirm whether this issue is fixed in 9.1.1? Additionally, if anyone who opened a case could respond, I'd like to hear what support is suggesting, as I am currently experiencing this on some important screens. Thanks, David
I have another issue with comparing: I want to compare should_be with server_installed_package. Sometimes the installed package version is higher after patching. In the git example below, comparing number by number: if the expected number is lower than the installed one (2 < 3), it should be marked as complete; if the numbers are equal, it should move on and check the next number, and so on.

CI        Installed              shouldbe               server_installed_package   Status
server1   git-2.31.1-3.el8_7     git-2.39.3-1.el8_8     git-3.40.3-1.el8_8         Not complete
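A sketch of one way to do that numeric, segment-by-segment comparison in SPL. It assumes package strings shaped like the git example (name-major.minor.patch-release) and only compares the first three numeric segments; anything beyond that (release, el8 suffix) is ignored:

```
| rex field=shouldbe "^\w+-(?<should_ver>\d+(\.\d+)*)"
| rex field=server_installed_package "^\w+-(?<inst_ver>\d+(\.\d+)*)"
| eval s=split(should_ver, "."), i=split(inst_ver, ".")
| eval cmp=case(
    tonumber(mvindex(i,0))!=tonumber(mvindex(s,0)), tonumber(mvindex(i,0))-tonumber(mvindex(s,0)),
    tonumber(mvindex(i,1))!=tonumber(mvindex(s,1)), tonumber(mvindex(i,1))-tonumber(mvindex(s,1)),
    tonumber(mvindex(i,2))!=tonumber(mvindex(s,2)), tonumber(mvindex(i,2))-tonumber(mvindex(s,2)),
    true(), 0)
| eval Status=if(cmp>=0, "Complete", "Not complete")
```

Comparing with tonumber() matters here: a plain string comparison would rank 2.9 above 2.10. Package names containing hyphens would need a different rex pattern.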
As an app add-on creator, we don't have control over the indexes available in the Splunk Cloud user environment. In the app's Input.config we set index = default.

In the add-on flow we add a data input configuration, with the new input stream URLs and the index it should point to, as shown in the image below. As you can see, the index is populated with "default".

1. How can we show all available indexes in the dropdown?
2. If the desired index is not in the available list, how can we let the user input a string and trigger a search?
3. If the user doesn't want to pick an index, the default should be selected.
@gcusello Thanks for the response. As an app creator, we don't have control over the indexes available in the Splunk Cloud user environment. In the app's Input.config we set index = default. During the app install flow, the configuration shows the new input stream and which index it should be assigned to.

1. How can we show all available indexes in the dropdown?
2. If the desired index is not in the available list, how can we let the user input a string and trigger a search?
3. If the user doesn't want to pick an index, the default should be selected.

I hope that clarifies my ask.
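To populate such a dropdown with the indexes the current user can actually see, one common approach is to back it with a REST search; a sketch (it assumes the user's role is allowed to query the endpoint, and it filters out internal indexes):

```
| rest /services/data/indexes
| search NOT title=_*
| fields title
```

In a setup page or dashboard input, "title" would be both the label and the value, with "default" preselected as the initial value.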
You could configure a lookup which allows for wildcards, e.g. "+1408717712*" equates to site A etc.
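In configuration terms that is a lookup with WILDCARD match_type in transforms.conf; a sketch where the lookup name, file name, and field name are all assumptions:

```
# transforms.conf (lookup, file, and field names are assumptions)
[phone_to_site]
filename = phone_to_site.csv
match_type = WILDCARD(phone)
max_matches = 1
```

The CSV would then hold rows like +1408717712*,SiteA and the search would apply it with the lookup command on the phone-number field.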
Assuming you have a "domain" field in both the lookup file and an index, this should get you started:

index=foo [ | inputlookup denieddomains.csv | fields domain | format ]

The subsearch (inside square brackets) fetches the contents of the lookup table (I made up the name; replace it with your own), keeps only the "domain" field, then formats the results into a search string which is returned to the main search for execution.
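For illustration, with a two-row lookup the subsearch would expand to something like the following (the domain values here are made up):

```
index=foo ( ( domain="bad1.example.com" ) OR ( domain="bad2.example.com" ) )
```

You can see the exact expansion for your own lookup by running the bracketed part on its own and inspecting the "search" field it produces.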