All Posts


I am having a random issue where it seems characters are present in a field which cannot be seen. In the results below, even though the values appear to match each other, Splunk sees them as two distinct values.

If I download the results, one of the two names has characters in it that are not visible when looking at the results in the Search app. If I open the file in my text editor, one of the two names is in quotes; if I open the file in Excel, one of the two names is preceded by "‚Äã" (which looks like a mojibake rendering of an invisible Unicode character such as a zero-width space). It feels like a problem with the underlying lookup files (.csv); however, the problem is not consistent, and only a very small percentage of results (<0.005%) has this incorrect format.

Trying to use regex or replace to remove non-alphanumeric characters from the field does not seem to work, and I am at a loss. Any idea how to remove "non-visible" characters or correct this formatting?
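In case it helps anyone hitting the same symptom: assuming the stray character really is a zero-width space (U+200B) or a similar invisible code point, a byte-oriented class like [^a-zA-Z0-9] may not catch it, but targeting the code points directly with a sed-style replacement should. A sketch (the field name NAME and the list of code points are examples, not taken from the original post):

```spl
| rex field=NAME mode=sed "s/[\x{200B}\x{FEFF}\x{00A0}]//g"
```

or equivalently with eval: | eval NAME=replace(NAME, "[\x{200B}\x{FEFF}\x{00A0}]", ""). The longer-term fix is cleaning the lookup CSVs themselves, since the characters will otherwise come back on the next lookup update.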
Please consider upvoting a new Splunk Idea to get it more attention: https://ideas.splunk.com/ideas/EID-I-2226
Yes, I tried, but in my case I need to extract the whole content.payload as one field.
Hi @Mfmahdi,

you could truncate your events by defining the max length of each event with the TRUNCATE option in props.conf. Otherwise, you could define a regex to exclude the part of each event that you don't want, using the SEDCMD setting in props.conf.

For more info, see https://docs.splunk.com/Documentation/Splunk/9.2.0/admin/Propsconf

Ciao.
Giuseppe
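To make that concrete, a minimal props.conf sketch (the sourcetype name and the noise pattern are placeholders to adapt, not values from the original question):

```ini
[waf_sourcetype]
# hard cap on event length, in bytes
TRUNCATE = 20000
# drop an unwanted section at index time; sed-style substitution on _raw
SEDCMD-drop_noise = s/BEGIN_NOISE.*?END_NOISE//g
```

SEDCMD rewrites the raw event before it is indexed, so the dropped text is not searchable afterwards; it needs to be applied on the indexer or heavy forwarder that parses the data.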
So: Rock On! From the GUI one is able to create a default view and pass that to all the users. Bummer that it looks like those settings live in a KV store somewhere and not in a .conf file?
I am a new user to Splunk, working to create an alert that triggers if it has been more than 4 hours since the last event. I am using the following query, which I have tested and which comes back with a valid result:

index=my_index
| stats max(_time) as latest_event_time
| eval time_difference_hours = (now() - latest_event_time) / 3600
| table time_difference_hours

Result: 20.646666667

When I enable the alert, I set it to run on a schedule. Additionally, I choose a custom condition as the trigger and use the following:

eval time_difference_hours > 4

But the alert does not trigger. As you can see from the result, it has been over 20 hours since the last event was received in Splunk, so I am not sure what I am missing. I have also modified the query to include a time range with earliest=-24h and latest=now, but that did not work either.
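One likely cause, speaking generally: the custom trigger condition is itself a secondary search run over the alert's results, so it expects search syntax (e.g. search time_difference_hours > 4) rather than an eval expression. An often simpler alternative is to move the condition into the query itself and trigger on result count:

```spl
index=my_index
| stats max(_time) as latest_event_time
| eval time_difference_hours = (now() - latest_event_time) / 3600
| where time_difference_hours > 4
```

with the trigger condition set to "Number of Results" greater than 0. Note also that stats over a window containing no events returns no rows at all, so the scheduled search needs a wide enough time range (e.g. earliest=-7d) for latest_event_time to be found.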
Hi @Amit.Bisht, Thanks for letting me know. If the Community does not chime in, you can always contact AppD Support. How do I submit a Support ticket? An FAQ 
Splunk version: splunk-9.2.0.1 Host: Linux (Rocky 9) Hello, I am a new user testing Splunk. I installed the instance on Linux (Rocky 9). From reading various Q&A and docs, I see the location to change the instance address/IP and port is in a file within the installation directory called splunk-launch.conf, though it doesn't look like this file exists anymore. Please guide me through changing these settings in the latest version of Splunk (9.2.0.1) in Unix CLI. My goal is to change the web interface address from http://alpha:8000 to http://beta:8000. Thank you.
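For reference, a sketch of where these settings usually live on 9.x (paths relative to $SPLUNK_HOME; the values shown are examples):

```ini
# etc/splunk-launch.conf  (this file does still exist; bind IP for all ports)
SPLUNK_BINDIP = 10.0.0.5

# etc/system/local/web.conf  (web UI port)
[settings]
httpport = 8000
```

The hostname part of http://alpha:8000 vs http://beta:8000 is resolved by DNS and the machine's own hostname rather than by Splunk itself, so changing "alpha" to "beta" is typically an OS/DNS change (e.g. hostnamectl set-hostname beta plus a DNS update), followed by a Splunk restart.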
Hi All,

I have an alert that shows results for 7:00 AM to 7:01 AM with more than 20 results. The cron for the alert is: * 6-15 * * 1-5, and the trigger condition is: more than 4 results.

I checked and found there were more than 4 results in the timeframe 7:00 AM to 7:01 AM, but the alert did not trigger an email. The same alert did trigger at 8 AM.

Checking the internal logs, I can see that at 7 AM alert_actions="", but at 8 AM alert_actions="email", which confirms that there was no email action at 7 AM.

What else can I check to confirm what happened?
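One place to look is the scheduler's own log, which records for each run whether the trigger condition was met and which actions fired (replace the search name with your alert's actual name):

```spl
index=_internal sourcetype=scheduler savedsearch_name="My Alert Name"
| table _time status result_count alert_actions suppressed
```

If suppressed is 1 for the 7 AM run, alert throttling is the likely cause; if result_count is below the threshold, the scheduled run's time range may not match the window you checked manually.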
we are getting WAF log and the events are very big we need to drop some lines from the events that has no meaningful value not the whole event. @gcusello  thank you in advance.
Installed Splunk Add-on for Unix and Linux 9.0.0; not getting memory data for an Ubuntu server. Checks performed:
1) Getting data for logical disk space and CPU, but not memory.
2) The sar utility is installed.
3) Enabled the hardware, CPU, and df metric stanzas, and added the index details too.
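For what it's worth: in the *nix add-on, memory metrics come from the vmstat.sh scripted input rather than from the hardware/cpu/df stanzas, so it's worth confirming that input is enabled as well. A sketch of the local override (the index name is an example):

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
index = os
disabled = 0
```

Then check sourcetype=vmstat for incoming data after a restart of the forwarder.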
This is a really old post, but I had the same problem. A search query that appears to be helping me find these problems is:

index=_internal sourcetype=splunkd log_level=ERROR component=HttpInputDataHandler

The results are imperfect because they don't exactly match what's shown in the authentication failures, but in my case it appears the errors are being caused by a source that is sending blank/missing tokens.
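To narrow down where the bad requests land, grouping the same errors can help; a sketch:

```spl
index=_internal sourcetype=splunkd log_level=ERROR component=HttpInputDataHandler
| stats count by host
| sort - count
```

Note that host here is the Splunk instance that received the request, not the client; correlating the error timestamps against sourcetype=splunkd_access in _internal may be needed to identify the actual sender.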
I was not getting any data at all; that was what was wrong... sorry for the miscommunication. I implemented your idea, and I am getting data now! Thank you!!!

However, the Employee column is not being filled. It might be because of a data issue, but I wanted to know if we can label the fields by source. For example, I have UserNumber in both sources meaning different things, and Name in both sources meaning different things. How can I help Splunk differentiate them? Are there any resources you would suggest? Thank you so much!
It depends what it is you are trying to do, and what you think is wrong. As it stands, PARENT_ACCOUNT is not a field beyond the stats command (since it isn't listed as an output field; dc just counts the distinct values of the field without listing them), so it should not appear in the final table.

For the "join", you don't need a join (and joins should usually be avoided where possible, as they are slow and have limitations). Try something like this:

index=* sourcetype=transaction OR sourcetype=users
| eval USERNUMBER=coalesce(USERNUMBER, NUMBER)
| eventstats values(NAME) as Employee by USERNUMBER
| stats dc(PARENT_ACCOUNT) as transactionMade values(Employee) as Employee by POSTDATE, USERNUMBER
| table USERNUMBER Employee POSTDATE transactionMade
We have upwards of 250k forwarders in one of our environments, and various levels of DNS caching make it very difficult for a forwarder to request a deployment server IP from a DNS name and maintain the connection consistently long enough to get the appropriate apps downloaded. I have seen a system request an IP from a DNS name, make an initial connection to a deployment server, then send a DNS query again only to be given a different IP address, which causes issues with the forwarder trying to establish a consistent trusted connection to a deployment server. That switch in deployment server destinations causes the forwarder to just try again later, until it can randomly establish a consistent connection.

We have put our deployment servers behind a load balancer before, but all the connections and logs show the forwarders coming from the same IP address; X-Forwarded-For support would help solve that at our scale.
So, I have one source (transactions) with UserNumber and another source (users) with Number, and I want to join them. The two sources use different field names. I want my table to have the employee's name, which is in the users source and which I get separately in the second query inside the join. Below is my SPL as of now:

index=* sourcetype=transaction
| stats dc(PARENT_ACCOUNT) as transactionMade by POSTDATE, USERNUMBER
| join left=L right=R where L.USERNUMBER=R.NUMBER
    [search sourcetype=users | stats values(NAME) as Employee by NUMBER]
| table USERNUMBER Employee PARENT_ACCOUNT POSTDATE transactionMade

What is it that I am doing wrong?
Question:  We are using Commvault Metallic to backup our O365 cloud-based user data in the Microsoft GCC.  How can we send the Commvault transaction logs to our on-prem Splunk servers for event analysis and reporting?  
Hi @parthiban,

you highlighted only the 403 response code; if you want the full string, you could use:

| rex "\"processing_stage\": \"(?<response>[^\"]+)"

which you can test at https://regex101.com/r/mz4c1L/2

Ciao.
Giuseppe
Hi everyone, this is my first post in the community; I have been using it for some time and it has been great. However, now I have an issue for which I am not able to find an answer or an existing thread, so I thought to ask if someone is able to help.

I have a search which gives me names of people, email addresses and other data. I wish to know if there is a way, when the value in the Email field is clicked, to open Outlook with all the searched email addresses filled in. Let's say I have 10 results; I would like all 10 emails to be filled into the Outlook email. I am able to do it through drilldown with click.value2 or row.fieldname, but those fill in one specific email. I wish to have this capability for group emails.

Before I go and use sendemail, I was wondering if this can be done via mailto and possibly how?

Hope you all have a good day!
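For what it's worth, one way to sketch this (assuming Simple XML and a field literally named Email): collapse all the addresses into one multivalue field, build a single mailto string, and drill down on that instead of on the individual row value:

```spl
... | stats values(Email) as Emails
| eval mailto_all="mailto:" . mvjoin(Emails, ";")
```

A table drilldown <link> can then reference $row.mailto_all|n$ (the |n suffix skips URL-encoding). Whether ";" or "," separates recipients depends on the mail client; Outlook generally accepts ";".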
Indeed, I have searched and read all the threads I could find about this issue to no avail.