All Posts

Sure you can do that - you can either populate the dropdown with static options using the month name, adding ".csv" on the end for the value, e.g.

  <input type="dropdown" token="month">
    <label>Month</label>
    <choice value="july">July</choice>
    <choice value="august">August</choice>
    ... more choices ...
  </input>

then your search is

  | inputlookup $month$.csv

... or you could make your lookup dynamic and look for lookups that match a pattern, e.g.

  <input type="dropdown" token="month">
    <label>Month</label>
    <search>
      <query>
        | rest splunk_server=local /servicesNS/-/-/data/lookup-table-files
        | where 'eai:acl.app'="your_app_name"
        | fields title
        | where match(title, "^(january|february|march|april|may|june|july|august|september|october|november|december)\.csv$")
        | eval month=replace(title, "\.csv", ""), month=upper(substr(month, 1, 1)).substr(month, 2)
      </query>
    </search>
    <fieldForLabel>month</fieldForLabel>
    <fieldForValue>title</fieldForValue>
  </input>
Hi, agreed, but why are source and sourcetype mixed up? It does not match what I have defined in inputs.conf. How do I fix it?

  DC01.xxx.xxx</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>CORP\ADmaint</Data><Data Name='SubjectUserName'>ADmaint</Data><Data Name='SubjectDomainName'>CORP</Data><Data Name='SubjectLogonId'>0x1b73fc</Data><Data Name='PrivilegeList'>SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege

  host = DC01
  source = WinEventLog:Security
  sourcetype = WinEventLog

These source and sourcetype values are mixed up and are not what inputs.conf specifies.
I had a similar issue. I created a new index for my Windows servers, defined the sourcetype in inputs.conf, and deployed the _TA_Windows app. Search works fine, but sourcetype and source are interchanged. Any thoughts?
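For comparison, a typical Windows event log input stanza looks like the sketch below (index and settings are illustrative, not taken from this thread). One thing worth checking, as an assumption about what is happening here: the event shown above is XML-rendered, and with renderXml the Splunk_TA_windows props normally assign the XmlWinEventLog sourcetype, so XML payloads appearing under a plain WinEventLog sourcetype often mean the TA's props/transforms are missing on the parsing tier rather than inputs.conf being ignored.

```ini
# Example inputs.conf on the universal forwarder (illustrative values)
[WinEventLog://Security]
index = winsrv
disabled = 0
# renderXml = true emits events in the XML form seen above;
# the Windows TA then expects sourcetype XmlWinEventLog
renderXml = true
```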
I got it working again. I had to recopy coldToFrozenExample.py and edit it for my environment. I was missing some sections from the script.
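For anyone hitting the same thing, the essential shape of such a script is sketched below. This is a minimal sketch modeled on the stock coldToFrozenExample.py shipped in $SPLUNK_HOME/bin, not the full sample; the archive directory is an assumption you would change for your environment.

```python
# Minimal sketch of a coldToFrozen script: Splunk calls it with the
# bucket directory as the single argument when a bucket freezes.
import os
import shutil
import sys

ARCHIVE_DIR = "/opt/frozen"  # assumption: your archive location


def archive_bucket(bucket_path, archive_dir):
    """Copy a bucket to the archive, keeping only the rawdata journal."""
    if not os.path.isdir(bucket_path):
        raise Exception("Given bucket is not a valid directory: " + bucket_path)
    rawdata = os.path.join(bucket_path, "rawdata")
    if not os.path.isdir(rawdata):
        raise Exception("No rawdata directory in bucket: " + bucket_path)
    dest = os.path.join(archive_dir, os.path.basename(bucket_path))
    os.makedirs(archive_dir, exist_ok=True)
    # Only the rawdata journal is needed to thaw the bucket later;
    # the .tsidx index files are rebuilt when the bucket is thawed.
    shutil.copytree(rawdata, os.path.join(dest, "rawdata"))
    return dest


if __name__ == "__main__" and len(sys.argv) == 2:
    archive_bucket(sys.argv[1], ARCHIVE_DIR)
```

If the script exits non-zero, Splunk retries and the bucket is not deleted, so it is worth testing it by hand against a scratch bucket directory first.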
I like this query, but I have indices with long names that incorporate underscores "_", and the split command is not working in this scenario. Quotation marks did not work, but asterisks did. I do not want to use asterisks, since I will be generating alerts and do not want extra characters in the message. Please let me know how to use split with this naming convention for indices.

  | metasearch index=AB_123_CDE OR index=CD_345_EFG OR index=EF_678_HIJ
  | stats count by index
  | append
      [ noop
      | stats count
      | eval index=split("AB_123_CDE;CD_345_EFG;EF_678_HIJ", ";")
      | mvexpand index ]
  | stats max(count) as count by index
  | where count = 0
I am sorry, but this sounds like a bad excuse for not thinking this through. I have never seen that it was popular, recommended, or even supported to install the forwarder alongside the server. If you have any good links on this, then please supply them. If the Docker people want this, then create a solution for them and leave the rest of us alone. Imagine all the automation (Puppet, Ansible, self-coded, and so on) that now has to be changed. Monitoring of the user and service needs to be changed. There must be a ton of code/checks/monitoring that needs to be changed.

In regards to when this change was implemented, I did a quick install test (wiped each time):

  rpm -i splunkforwarder-7.3.0-657388c7a488-linux-2.6-x86_64.rpm - owner & group = splunk
  rpm -i splunkforwarder-8.0.4-767223ac207f-linux-2.6-x86_64.rpm - owner & group = splunk
  rpm -i splunkforwarder-8.2.6-a6fe1ee8894b-linux-2.6-x86_64.rpm - owner & group = splunk
  rpm -i splunkforwarder-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm - owner & group = splunk
  rpm -i splunkforwarder-9.0.5-e9494146ae5c.x86_64.rpm - owner & group = splunk
  rpm -i splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm - owner & group = splunkfwd

And just to verify, I upgraded from 9.0.5 to 9.1.0.1, and yes, the owner changed from splunk to splunkfwd. So be careful out there. To be fair, support said that this should be fixed in the coming 9.1.1, retaining the previous user. Even the documentation uses "splunk" as the owner all the way from version 9.0 to 9.0.5: https://docs.splunk.com/Documentation/Forwarder/9.0.5/Forwarder/Installanixuniversalforwarder So I simply don't buy the excuse. Now, if we are installing 9.1.0.1 and want to keep using "splunk" as the owner, we will have to do it manually: make the install, create the "splunk" user, update the unit file, chown SPLUNK_HOME to splunk, update SPLUNK_OS_USER=splunk in splunk-launch.conf, and then delete "splunkfwd", according to support. Just why.
That said, good reason or bad, it does not change the fact that this was done out of the blue with no prior warning. The same happened with the change from initd to systemd and when you changed the service name. Sorry for the rant; it just annoys me that this should have been handled completely differently, imo.
I am running an alert which is not triggering email actions when using the real-time option. The alert is used to search for hosts which have not sent logs in the last 5 minutes. For example, I shut down a host for testing and wait 5 minutes. I then manually run the search string with a specified time frame (e.g. last 15 minutes), and I am able to obtain results. However, even though the same search was configured in the form of an alert running in real time, it produces no results, nor does it trigger an email. Here is the search I am using:

  index=*
  | stats max(_time) as latest by host
  | eval recent=if(latest > relative_time(now(),"-5m"),1,0), realLatest=strftime(latest, "%Y-%m-%d %H:%M:%S")
  | fields - latest
  | where recent = 0
  | rename host AS Host, realLatest AS "Latest Timestamp"
  | table Host, "Latest Timestamp"
I posted my script in this thread and it's the sample script, just edited for my environment. It's been working fine for a long time until this issue arose.
Good afternoon, I am trying to show information from a CSV which is static but will be replaced as time goes on. I was wondering if there was a way to make the CSV filenames a dropdown option in an input, which would correlate with the searches below it in the dashboard.

For example, input dropdown values:

  july.csv
  august.csv

And the search would be

  | inputlookup $august.csv$ ...

Is this an option, or is there a better way to do this?
I selected from the time picker, e.g. 8/14/23 00:00:00 to 8/15/23 00:00:00.
I'm trying to add an input within a canvas, as indicated here: https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/inputConfig#Inputs_in_the_canvas

I have been dragging my input to the canvas without luck. Then I found this video that shows a configuration option for placing inputs in or above the canvas: https://www.youtube.com/watch?v=eyXAa6xxrso However, on my dashboard, I do not have these options. Is there a configuration that I am missing? Why am I unable to move my inputs to the canvas?

Splunk Cloud Version: 9.0.2209.3
Hello All, I have seen this post (which is helpful): "How to get the on click marker gauge redirect to a dashboard?" I would like to run a search instead of setting a variable on a panel. Is this possible? The JavaScript writes the value to a $token$ variable on a second panel. I would like to run a search, but the filler gauge does not have an option for a drilldown. Yes, the easy way is to just click the search magnifying glass. Thanks, eholz1
Good question. Since Forwarder 9.0, "least privilege mode" (running the Splunk service as NON-ROOT) is enabled by default, whereas Enterprise does not have such a feature (yet?). Previously, Forwarder and Enterprise shared the same account, `splunk`, so since 9.0 the Forwarder creates a dedicated user, `splunkfwd`, to prevent user permission conflicts. Today it is very popular to install the Forwarder and Enterprise on the same instance: install the Forwarder in the base image (so that all dockerized instances are monitored by default) to monitor platform-internal metrics such as CPU, memory, network resources, and system files, and install Enterprise to ingest data from external resources or to host indexing/search. So this is just a default account change, much like the default user changing from LocalSystem to a Virtual Account on Windows since Forwarder 9.1, as a security improvement.
As we don't know the content of your script, we cannot really help you more. I propose that you find someone who knows enough Python, and together you look at what's wrong in the script and fix it.
Good to hear that this is working! BUT you still have this issue with the TZ definition on the log file. If you ever get logs from a TZ with a xx:30 offset (like India's +05:30) instead of a full hour, those will get a wrong UTC time in Splunk.
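To illustrate the point (a standalone Python sketch, not Splunk code, with made-up timestamps): a half-hour offset that gets treated as a whole hour shifts every event's UTC time by 30 minutes.

```python
# Demonstrates why a half-hour UTC offset (e.g. India's +05:30) must be
# parsed exactly: rounding it to a whole hour skews every timestamp.
from datetime import datetime, timezone, timedelta

stamp = "2023-08-14 12:00:00"
naive = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")

# Correct interpretation: the log was written at UTC+05:30.
correct = naive.replace(tzinfo=timezone(timedelta(hours=5, minutes=30)))
# Wrong interpretation: the offset is taken as a full +05:00.
wrong = naive.replace(tzinfo=timezone(timedelta(hours=5)))

skew = wrong.astimezone(timezone.utc) - correct.astimezone(timezone.utc)
print(skew)  # 0:30:00 -- every event lands 30 minutes off in UTC
```

The same arithmetic applies inside Splunk's timestamp extraction: if the TZ definition only captures whole hours, every indexed _time is off by that residual half hour.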
The packet field appears to be encoded or encrypted. You would have to check with the vendor to determine how to make the field legible, if it can be done at all. It's possible this is data straight off the wire and that you would need the SSL certificate to process it - not something one can do in SPL.
We managed to resolve the the "type 28 / 500 internal server" Enterprise Security installation error by cleaning out /tmp.  
This error says that your DNS service cannot resolve that name. You should fix that first and then check whether the UF works.
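A quick way to confirm name resolution from the forwarder host, outside Splunk (the hostname below is a placeholder, not one from this thread):

```python
# Quick resolver check: succeeds only if the OS can resolve the name,
# using the same resolver path (hosts file, DNS) the forwarder uses.
import socket

def can_resolve(hostname):
    """Return True if the hostname resolves to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # Replace with your deployment server / indexer name.
    print(can_resolve("my-indexer.example.com"))
```

If this returns False for the name in your outputs.conf or deploymentclient.conf, fix DNS (or add a hosts-file entry) before troubleshooting the UF itself.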
When you are enabling Splunk boot-start (please check the exact syntax in the docs) as the systemd-managed version, splunk creates a systemd config file in /etc/systemd/system. Its name is Splunkd.service or something similar. You could change this if needed/wanted via splunk-launch.conf. As I said earlier, when you run "splunk enable boot-start ..." as root, it creates the systemd conf file with standard values based on your host's current physical attributes. If you want to restrict memory usage, just decrease that memory parameter. I suppose that this restricts Splunk's memory usage.
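For reference, the memory setting in question looks roughly like this in the generated unit file (values illustrative; newer systemd versions call the directive MemoryMax, older ones MemoryLimit):

```ini
# /etc/systemd/system/Splunkd.service (excerpt, illustrative values)
[Service]
# Cap how much memory the splunkd cgroup may use; lower this to restrict it.
MemoryLimit=8G
```

After editing the unit file, run `systemctl daemon-reload` and restart the service for the new cap to take effect.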
We got the same error for all the members of the cluster.  When it occurred, we had to restart Splunk on each member.