All Posts


Looks like you haven't evaluated _time. Try:

| eval _time=case(row=0,strptime(StartTime,"%F %T.%6N"), row=1,strptime(StartTime,"%F %T.%6N"), row=2,strptime(EndTime,"%F %T.%6N"), row=3,strptime(EndTime,"%F %T.%6N"))
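A rough Python sketch of what that case() eval does (timestamps and row values here are made up for illustration): rows 0 and 1 take StartTime, rows 2 and 3 take EndTime. Splunk's "%F %T.%6N" corresponds roughly to Python's "%Y-%m-%d %H:%M:%S.%f" at microsecond precision.

```python
from datetime import datetime

# Assumed Python equivalent of Splunk's "%F %T.%6N" strptime format.
FMT = "%Y-%m-%d %H:%M:%S.%f"

def event_time(row, start_time, end_time):
    # Rows 0-1 use StartTime, rows 2-3 use EndTime, mirroring the case() above.
    src = start_time if row in (0, 1) else end_time
    return datetime.strptime(src, FMT)

print(event_time(0, "2023-05-01 12:00:00.123456", "2023-05-01 13:00:00.000001"))
```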
I was ready to say the dedup wasn't the issue because I thought I had previously crossed that off. The case_id is only supposed to have 2 events: when the case is opened and when it is closed. So I thought each id would only appear twice and the dedup was working in my favor. It looks like I didn't do my due diligence and make sure they're not updated again. Thanks for forcing me to check back and confirm the case_ids do repeat. I'm glad the solution is simple and something I overlooked.
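A small sketch of the check described above, with made-up events: if every case_id really appeared exactly twice (open + close), counting occurrences per id would flag nothing, but an extra update on an id defeats that assumption.

```python
from collections import Counter

# Hypothetical events: (case_id, status). C2 gets an extra update,
# breaking the "exactly two events per case" assumption.
events = [
    ("C1", "opened"), ("C1", "closed"),
    ("C2", "opened"), ("C2", "closed"), ("C2", "updated"),
]

counts = Counter(cid for cid, _ in events)
repeats = [cid for cid, n in counts.items() if n > 2]
print(repeats)  # ['C2']
```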
I ran into a similar issue, and there could be at least two reasons for this. Here is the search the wizard generates:

index=* OR index=_* _sourcetype="WinEventLog" | where _sourcetype="WinEventLog" | head 100

1. The Ingest Sample Data wizard uses the "where" search command, which is case sensitive, so make sure the sourcetype's case matches how it actually shows up in events. WinEventLog is not the same as wineventlog.

2. The wizard also uses the _sourcetype field instead of the sourcetype field. That means that if any sourcetype transformation is already happening, the _sourcetype field will hold the original sourcetype. You can check this by searching for your events and adding the _sourcetype field (which is normally hidden):

index=* sourcetype="WinEventLog" | head 100 | eval orig_sourcetype=_sourcetype

Patrick
Hello Members, I would like to import/show data in a Splunk dashboard. The data is the result of a MySQL query run by PHP to create an HTML page with the results in an HTML table. Most likely the easiest way to do this would be to write the data to a CSV file and use a Splunk forwarder to send it to Splunk. The data needs to be checked once a day. I was wondering if there is a way to build a dashboard from the data via the Splunk REST API, or to import/forward the HTML page that gets created from the MySQL query. The query is run on a remote server. I looked at splunk-sdk-python, but its implementation is not user friendly: it requires Docker, which I cannot get running for some reason. I am open to any and all suggestions. Thanks, eholz
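For the CSV-plus-forwarder route mentioned above, a minimal sketch (the column names and rows are made up; the real script would fetch them with a MySQL client, and a universal forwarder would then monitor the output file):

```python
import csv

# Hypothetical rows as returned by the daily MySQL query.
rows = [("host1", "ok"), ("host2", "fail")]
header = ["hostname", "status"]

# Write with a header row so Splunk can extract fields from the CSV.
with open("query_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```

Run daily from cron on the remote server, this sidesteps the SDK and Docker entirely.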
From your code I received this: "_time",value ,0 ,1 ,1 ,0
Hi @gcusello, I have a similar problem with my use case. I am looking to filter out entries from two lookup files; I am not using any index. Can you help me compare field values from two different lookup files? Below is sample data from the two lookup files.

1. Firewall_NEW_Database.csv

Hostname  Location  Datacenter
ABCD      US        xyz
LMNO      SING      ABC

2. Firewall_OLD_Database.csv

Firewall  Location  Datacenter
LMNO      SING      ABC
ABCD      US        xyz

In the above two lookups I want to compare similar values based on the Hostname and Firewall fields and filter them out with a count.
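For reference, the comparison being asked about amounts to a set intersection/difference on the Hostname and Firewall columns. A Python sketch using the sample rows from the post (inline strings stand in for the two CSV files):

```python
import csv
import io

# Stand-ins for Firewall_NEW_Database.csv and Firewall_OLD_Database.csv.
new_csv = "Hostname,Location,Datacenter\nABCD,US,xyz\nLMNO,SING,ABC\n"
old_csv = "Firewall,Location,Datacenter\nLMNO,SING,ABC\nABCD,US,xyz\n"

# Key on Hostname in the new lookup and Firewall in the old one.
new_hosts = {r["Hostname"] for r in csv.DictReader(io.StringIO(new_csv))}
old_hosts = {r["Firewall"] for r in csv.DictReader(io.StringIO(old_csv))}

matched = new_hosts & old_hosts   # present in both lookups
only_new = new_hosts - old_hosts  # only in the new lookup
print(len(matched))  # 2
```

In SPL the same idea is usually done with two inputlookup commands and a stats count by the common key.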
You can have either SAML or LDAP authentication, but not both.  Splunk authentication is always available. To force Splunk authentication, go to http://<your Splunk URL>/en-us/account/login?loginType=Splunk.  The "en-us" part can be replaced with your own locale specifier.
Hello. Thanks for your help. I have tried with the regex you suggested and with this configuration:

[setindexHIGH]
SOURCE_KEY = _raw
REGEX = audits
DEST_KEY = _MetaData:Index
FORMAT = imp_high

Same result: it is not working, and we are still receiving the events in the imp_low index. If we run a search for the events, we can see that the field named topic is being indexed, but if we switch the view to the raw text of the event, I cannot see the words topic or audits in the raw text. It looks like that info is being removed from the event. Could it be because of the props settings?
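For reference, index-time routing like this only fires if the transform is bound to the data in props.conf on the first Splunk instance that parses the events (indexer or heavy forwarder). A sketch of the usual wiring; the props stanza name here is an assumption:

```ini
# props.conf -- bind the transform to the incoming sourcetype (assumed name)
[your:sourcetype]
TRANSFORMS-routing = setindexHIGH

# transforms.conf -- route events whose raw text matches "audits"
[setindexHIGH]
SOURCE_KEY = _raw
REGEX = audits
DEST_KEY = _MetaData:Index
FORMAT = imp_high
```

Note that REGEX runs against _raw: if "audits" only exists as an indexed field (e.g. topic) and not in the raw text, the regex will never match, which is consistent with what you describe.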
I removed the "by DateTime" clause and used the timewrap command. It gives me the output for the last 24 hours correctly; however, I only receive files on the weekends, and if I try to use this command it gives me too many unwanted fields with no values.
Hello, is it possible to configure Splunk to receive a webhook with some information added to it, and if so, can you give me a link to a tutorial?
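Incoming webhooks are typically received through the HTTP Event Collector (HEC). A minimal sketch of the receiving side; the token stanza name, index, and sourcetype are assumptions:

```ini
# inputs.conf on the receiving Splunk instance -- enable HEC
[http]
disabled = 0
port = 8088

# A token for the webhook sender (names are illustrative)
[http://webhook]
token = <generated-token>
index = main
sourcetype = webhook:json
```

The sender then posts to /services/collector/event (JSON) or /services/collector/raw with an Authorization: Splunk <token> header. Note that some webhook sources cannot set custom headers, in which case a small relay in front of HEC is needed.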
I just saw your new message, it works even better and it's cleaner. Thank you for your help !  
We have recently set up SAML authentication on our Splunk search head, which will be accessed by our vendor using SSO. I wanted to enquire whether LDAP authentication can also be enabled, local to my team. Also, what if SAML authentication or the group mapping on our IdP (Azure AD) breaks at some point and we cannot get into Splunk? Is there, or can we enable, a local admin login on the Splunk search head that will be managed by our Splunk admin?
By setting it manually for a source, it works, even if it is not optimal: | eval "Centreon"=if(isnull(Centreon),0,'Centreon')
Try like this | table Etat, "Control-M", "Dynatrace", "ServicePilot", "Centreon" | fillnull value=0 "Control-M", "Dynatrace", "ServicePilot", "Centreon"
Hi @ITWhisperer, Thank you for your help. I have my source "Centreon", but it does not display 0 yet. I had already tried "fillnull", but poorly, because it created extra fields. Best Regards, Rajaion
Splunk requires a hierarchical file system.  You can, however, move your frozen data to an object store, if you wish.
A majority of these are blacklisted from bundle replication and only exist on the SH cluster. I will check this out still. Thanks!
It depends on how often bundles are rebuilt on your system.  Start with 4 hours and add or subtract as necessary.
If you have CLI access then you can get that by looking at $SPLUNK_HOME/share/splunk/3rdparty/Copyright-for-CherryPy-*.txt.
When performing a query that creates a summary report, the associated search.log file shows:

ResultsCollationProcessor - writing remote_event_providers.csv to disk.

Then, two hours later, it reports:

StatsBuffer::read: Row is too large for StatsBuffer, resizing buffer. row_size=77060 needed_space=11536 free_space=153653063

This is soon followed by lots of roughly minute-by-minute messages of the form:

SummaryIndexProcessor - Using tmp_file=/opt/splunk/..../RMD....tmp

What might be happening in that two-hour window? We are running Splunk Enterprise 9.1.1 under Linux. @koronb_splunk @C_Mooney