All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi @Anonymous / @Anonymous, I have recently started using your "File/Directory Information Input" app. I believe that it does not work with Splunk's Python 3, which is the default version in Splunk 8. Is this something that you still work on and maintain? I have been able to get it working if I set "python.version = python2" in the system server.conf, but it would be better if I could set this within the app rather than Splunk-wide. In general it has been working for me when I use it within a UF that has the latest version of Python 2, so 2.7.5-89 works on Linux. It does have some issues around the 'file_filter' setting when filtering; this again worked closer to expectations once Python 2 was patched to the latest minor release, 2.7.5-89. But when it works it is great and does exactly what I want, so thank you very much. Regards
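A hedged aside, not from the app's docs: if the add-on's data input is a scripted or modular input, Splunk 8.x generally lets you pin the interpreter per input stanza instead of system-wide in server.conf. A minimal sketch, with a placeholder stanza name since the app's actual input may differ:

# $SPLUNK_HOME/etc/apps/<the_app>/local/inputs.conf
# stanza name is a placeholder; copy the real one from the app's default/inputs.conf
[script://./bin/file_dir_info.py]
python.version = python2

That keeps the Python 2 requirement scoped to this one app rather than the whole Splunk instance.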
We can send emails to recipients, but the email does not include the host name that generated the alert.
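One hedged sketch, assuming the alert is a saved search whose results include a host field: result tokens such as $result.host$ can be referenced in the email subject and message. The search name and address below are placeholders:

# savedsearches.conf
[My Alert]
action.email = 1
action.email.to = team@example.com
action.email.subject = Splunk alert $name$ on $result.host$
action.email.message.alert = Host $result.host$ generated this alert.

Note that $result.host$ is taken from the first result row, so alerts covering multiple hosts may need the results table included in the email instead.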
We are just now beginning to deploy the splunkforwarder for Linux in our large organization. We are running the agent as a systemd service. The file /opt/splunkforwarder/etc/apps/<redacted>/local/deploymentclient.conf has these settings:

[deployment-client]
clientName = <redacted>

[target-broker:depolymentServer]
targetUri = splunkuf-<redacted>:8089
# The splunk UF will phone home every 14400 sec = 4hrs
# 300sec=5min 900sec=15min
phoneHomeIntervalInSecs = 900

On RedHat 8.4 systems there are no issues when I run systemctl status SplunkForwarder -l, but on RedHat 7.9 I get the following:

Splunk> Australian for grep.
Checking prerequisites...
Management port has been set disabled; cli support for this configuration is currently incomplete.
Invalid key in stanza [target-broker:depolymentServer] in /opt/splunkforwarder/etc/apps/<redacted>local/deploymentclient.conf, line 12: phoneHomeIntervalInSecs (value: 900).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Checking conf files for problems...
Done
Checking default conf files for edits...

If I move the phoneHomeIntervalInSecs entry under [deployment-client], I don't get the error:

[deployment-client]
clientName = <redacted>
phoneHomeIntervalInSecs = 900

[target-broker:depolymentServer]
targetUri = splunkuf-<redacted>:8089
# The splunk UF will phone home every 14400 sec = 4hrs
# 300sec=5min 900sec=15min
# phoneHomeIntervalInSecs = 900

Please advise on the correct location for this setting. Thanks
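For what it's worth, the deploymentclient.conf spec lists phoneHomeIntervalInSecs under the [deployment-client] stanza, which matches the behaviour you saw: the stricter 7.9 host (presumably running an older UF build) rejects it under [target-broker:...]. A sketch with your values kept as placeholders:

[deployment-client]
clientName = <redacted>
# phone home every 900 sec = 15 min
phoneHomeIntervalInSecs = 900

[target-broker:deploymentServer]
targetUri = splunkuf-<redacted>:8089

(The target-broker stanza name above is spelled as in the docs.)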
Aloha, we have a reporting requirement to create a pie chart using 2 input files. So far we have successfully created bar charts with inputlookup files. Could you please advise the best way to create a pie chart using 2 inputlookup files? Thanks in advance.
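One hedged way to do it, assuming two hypothetical lookups lookup_a.csv and lookup_b.csv that each contain a category column and a count column: append them into a single result set, aggregate to one row per slice, and then pick the Pie visualization for the panel.

| inputlookup lookup_a.csv
| append [| inputlookup lookup_b.csv]
| stats sum(count) AS total BY category

A pie chart only needs one label field and one numeric field, so whatever your real columns are, the goal is to end up with exactly that shape before switching the chart type to Pie.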
Hi, I'm trying to execute a query in the SQL editor. The problem is, the editor doesn't let me write or execute any query, even though I have the admin permission level. This is the first time I'm having this issue.
I hate hardcoding dynamic things. Sooner or later those things break. I have data with fields

... forecast_2020=400, forecast_2021=500, forecast_2022=650, forecast_2023=800 ...

and in some search I need to use the correct forecast for the current year. What I could do is

... | eval year=strftime(now(),"%Y"), forecast=case(year==2021, forecast_2021, year==2022, forecast_2022, year==2023, forecast_2023, 1==1, 0)

This definitely results in problems in 2024; by then I will have a field forecast_2024, but nobody will remember to update the search. I'd rather use something along these lines:

... | eval year=strftime(now(),"%Y"), forecast=coalesce(forecast_{year}, 0)

However, the {} trick can only be used on the left-hand side in eval. Is there any similar cool trick that works on the right-hand side?
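One trick that seems to fit is foreach, whose wildcard tokens effectively template the field name on the right-hand side; a sketch:

...
| eval year=strftime(now(), "%Y"), forecast=0
| foreach forecast_* [ eval forecast=if("<<MATCHSTR>>"==year, '<<FIELD>>', forecast) ]

Here <<MATCHSTR>> expands to whatever the * matched (the year suffix) and '<<FIELD>>' to that field's value, so only forecast_<current year> survives, with no hardcoded years to forget about in 2024.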
I have a dashboard with a date filter in DD/MM/YYYY format and a table which shows data for the date selected in the drop-down, filtered on that date. Now I have a requirement to additionally show the data from 7 days back: the data currently shown for the selected date along with the data from 7 days earlier. For example: if the date selected in the drop-down is 07/01/2021, the 1st table should show data for 7th Jan and the 2nd table should show data for 1st Jan. My fields are HOST (server hostname) and RESULT (with 2 values, either PASS or FAIL), so the table I have created is

index=XXX ... | stats count(eval(searchmatch("PASS"))) AS PASS count(eval(searchmatch("FAIL"))) AS FAIL by HOST

This gives me PASS and FAIL counts against the HOST for the date selected. My requirement is to merge both dates' data into one table, but even two separate tables would do. Can anyone help guide me?
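A hedged sketch of one way to get both days into a single table, assuming the drop-down sets a token (called $sel_date$ here, in DD/MM/YYYY) and the panel's time range covers both dates:

index=XXX
| eval sel_day=relative_time(strptime("$sel_date$", "%d/%m/%Y"), "@d"), event_day=relative_time(_time, "@d")
| eval period=case(event_day==sel_day, "selected date", event_day==relative_time(sel_day, "-7d@d"), "7 days earlier")
| where isnotnull(period)
| stats count(eval(searchmatch("PASS"))) AS PASS count(eval(searchmatch("FAIL"))) AS FAIL by period HOST

Splitting by period keeps both dates in one table; running the same search twice with different earliest/latest windows in two panels would give the two-table version instead.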
I need to include in SPL only those files whose source ends with a date, not .bz2 (I don't want to use NOT).

Here is the SPL:
index="myindex" source="/data/app/20211209/CUS/app.log.*" | dedup source | table source

It returns:
/data/app/20211209/CUS/app.log.2021-12-09.bz2
/data/app/20211209/CUS/app.log.2021-12-09

I tried the SPL below but it doesn't return results:
source="/data/app/20211209/CUS/app.log.*."

Any idea? Thanks
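One hedged option: keep the wildcarded source term so the index lookup stays efficient, then filter on the date suffix with the regex command instead of NOT:

index="myindex" source="/data/app/20211209/CUS/app.log.*"
| regex source="app\.log\.\d{4}-\d{2}-\d{2}$"
| dedup source
| table source

The trailing $ only keeps sources that end with the date, so the .bz2 copies fall out without using NOT.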
We use Splunk for storing and analyzing Windows security events. We now want to start storing firewall events related to management ports.

I plan to use the following for retrieving the relevant data from the Windows security log:
whitelist9 = EventCode="(?:515[67])" Message="(?i)Direction\:\t+Inbound" Message="Destination\sPort\:\t+(135|139|445|3389|5985|5986)"

I would like to store these events using a different source type than the other events from [WinEventLog://Security]. How can I achieve this?
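A possible approach (a sketch, not tested against your environment) is an index-time sourcetype override applied on the first full parsing tier (indexers or a heavy forwarder, not the UF). This assumes the events arrive with the classic WinEventLog:Security sourcetype, and the new sourcetype name is purely illustrative:

# props.conf
[WinEventLog:Security]
TRANSFORMS-set_fw_sourcetype = set_fw_sourcetype

# transforms.conf
[set_fw_sourcetype]
REGEX = EventCode=515[67]
FORMAT = sourcetype::WinEventLog:Security:Firewall
DEST_KEY = MetaData:Sourcetype

If your events use a different sourcetype (for example the XML variants), the props stanza and REGEX would need to match what you actually see in the raw data.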
Hi everyone, I'm new here and having a problem filtering numbers out of a message. Message: Generated non direct deposit usages: 4. I just want to get the number; the number can be of any length. Who can help? Thanks
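A hedged sketch with rex, assuming the text is in _raw (add field=message if it sits in a separate field; usage_count is just an illustrative name):

... | rex "usages:\s+(?<usage_count>\d+)"

\d+ captures however many digits follow, so the length of the number doesn't matter.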
| makeresults
| eval _raw = "user_name machine_name event_name logon_time user1 machine1 logon 12/9/2021 7:20 user1 machine1 logoff 12/9/2021 7:22 user1 machine1 logon 12/9/2021 8:20 user1 machine1 logoff 12/9/2021 8:22"
| multikv forceheader=1
| eval _time = strptime(logon_time, "%m/%d/%Y %H:%M")
```| reverse```
| fields - _raw linecount
| eval login_time = if (event_name == "logon", logon_time, null()), logout_time = if (event_name == "logoff", logon_time, null())
| transaction endswith=(event_name=logon) startswith=(event_name=logoff) user_name machine_name
```| transaction startswith=(event_name=logon) endswith=(event_name=logoff) user_name machine_name```
| eval session_duration = tostring(duration, "duration")
| rename login_time as logon_time
| table user_name machine_name event_name logon_time logout_time session_duration

How do I replace the section of the query below with the results from a real query?

_raw = "user_name machine_name event_name logon_time user1 machine1 logon 12/9/2021 7:20 user1 machine1 logoff 12/9/2021 7:22 user1 machine1 logon 12/9/2021 8:20 user1 machine1 logoff 12/9/2021 8:22"

My base query yields data like the below, which needs to go to _raw:

index=foo source=bar | fields user_name, machine_name, event_name, logon_time

This query will return 1000s of rows that may look like the data below:

user1 machine1 logon 12/9/2021 7:20
user1 machine1 logoff 12/9/2021 7:22
user1 machine1 logon 12/9/2021 8:20
user1 machine1 logoff 12/9/2021 8:22

I need to feed those thousands of events to _raw for makeresults. Any help is much appreciated. Thanks
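A hedged sketch of what that probably looks like: the makeresults / eval _raw / multikv part only existed to fabricate sample rows, so with real data you can start from the base search and pipe straight into the same logic (field names copied from your example):

index=foo source=bar
| fields user_name machine_name event_name logon_time
| eval _time = strptime(logon_time, "%m/%d/%Y %H:%M")
| eval login_time = if(event_name == "logon", logon_time, null()), logout_time = if(event_name == "logoff", logon_time, null())
| transaction startswith=(event_name=logon) endswith=(event_name=logoff) user_name machine_name
| eval session_duration = tostring(duration, "duration")
| table user_name machine_name logon_time logout_time session_duration

There is no need to feed anything back into _raw or makeresults; those commands were only stand-ins for the real index=foo search.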
Some of my users under LDAP are not displayed in the UI; however, all the missing users can still log in and function. The behaviour is the same even if I use the admin account to log in. Here is the etc/system/local/authorize.conf:

[role_admin]
accelerate_search = enabled
change_own_password = enabled
delete_by_keyword = disabled
edit_search_schedule_window = enabled
edit_sourcetypes = enabled
edit_statsd_transforms = enabled
embed_report = enabled
export_results_is_visible = enabled
get_metadata = enabled
get_typeahead = enabled
grantableRoles = admin
importRoles =
input_file = enabled
list_inputs = enabled
list_metrics_catalog = enabled
output_file = enabled
pattern_detect = enabled
request_pstacks = enabled
request_remote_tok = enabled
rest_apps_view = enabled
rest_properties_get = enabled
rest_properties_set = enabled
rtsearch = enabled
run_multi_phased_searches = enabled
schedule_search = enabled
search = enabled
srchIndexesDefault = *;_*
srchMaxTime = 8640000
upload_lookup_files = enabled
Baseline works on both percentage calculation and deviation, so how does this work so efficiently?
Hey, I am having difficulties trying to extract fields from my Splunk logs. They are in the format of '{"field": "value1", "field2": "value2"}'. I've tried using spath but it doesn't seem to work. I think the issue is that the JSON object is enclosed in single quotes, so Splunk doesn't recognise it as JSON.
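If the surrounding single quotes really are the only problem, one hedged sketch is to strip them and point spath at the cleaned copy (assuming the whole event is the quoted object):

... | eval json=trim(_raw, "'") | spath input=json

trim removes the leading and trailing quote characters, and spath input=json parses the result instead of the raw event.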
I am encountering an issue when using a subsearch in a tstats query. Specifically, I am seeing the count of events increase, and the query takes much longer to run than one without the subsearch (1.5s vs 85s). Note that in my case the subsearch only returns one result, so I wouldn't expect such a pronounced performance impact. The examples below use Splunk's own data model that searches over the _audit index, so the performance issue is not as apparent since there are not as many events as in my use case.

| tstats count FROM datamodel=internal_audit_logs WHERE Audit.action="add"

Returns a count of 33.

| tstats count FROM datamodel=internal_audit_logs WHERE [ | makeresults annotate=f | fields -_time | eval Audit.action="add" ]

Returns a count of 46.

This issue is not reproducible with index queries.
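One hedged way to see why the two counts differ is to run the subsearch on its own with format appended, which prints the literal WHERE clause it expands into:

| makeresults annotate=f
| fields - _time
| eval Audit.action="add"
| format

If the expansion is not the expected "Audit.action"="add" (for example if the dotted field name comes back unquoted or split), that would explain both the higher count and the extra work tstats ends up doing.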
Hello Fellow Splunkers! Can someone please explain the need for deploying Splunk with the minimum hardware requirements? If the specs are reduced, is there data loss or just lag? I constantly get this question and have not been able to find anything on it in the Splunk documentation. Thanks in advance for the help!
I am following the docs, and when it asks for the logging level it only allows you to choose one level. What if I wanted multiple levels? It only seems to allow one to be selected. The documented steps are:

Change logging level
1. On Splunk Web, go to the Splunk Add-on for Cisco Meraki, either by clicking the name of this add-on on the left navigation banner or by going to Manage Apps, then clicking Launch App in the row for the Splunk Add-on for Cisco Meraki.
2. Click the Configuration tab.
3. Click the Logging tab.
4. Select a new logging level from the drop-down menu.
5. Click Save to save your configurations.
Edit: After working with Splunk support, this issue is fixed in TA version 8.5.0.

I recently upgraded our Windows TA from 8.0.0 to 8.2.0. I've noticed that with the Event IDs relating to users being removed or added to groups (4728, 4729, 4732) the user removed or added is logged by Windows with their full DN. Splunk before the upgrade was pulling the full DN and extracting it into the user field. Now it seems to not be doing the same. Our DNs contain "Lastname, Firstname" with the log having that first comma escaped.

12/09/2021 00:00:00 AM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4732
EventType=0
Type=Information
ComputerName=domaincontroller
TaskCategory=Security Group Management
OpCode=Info
RecordNumber=1111111111
Keywords=Audit Success
Message=A member was added to a security-enabled local group.

Subject:
  Security ID: CONTOSO\user_admin
  Account Name: user_admin
  Account Domain: CONTOSO
  Logon ID: 0xD5D5D5DA

Member:
  Security ID: CONTOSO\FLastname
  Account Name: CN=Lastname\, Firstname,OU=Users,DC=CONTOSO,DC=com

Group:
  Security ID: CONTOSO\Group_RW
  Group Name: Group_RW
  Group Domain: CONTOSO

This is extracted correctly into the Account_Name field, though both the Subject and Member users are placed into Account_Name as an mv field. For some reason, when this same value is extracted into user, it gets extracted only as "Lastname\".

I've done a diff on the default\props and transforms and didn't see any changes to the extractions of this field that I can find, and I had no customization here. I'm at a bit of a loss as to why this would even change. We are using the WinEventLog:Security sourcetype as well. Other extractions seem to be working as intended.
Hi, I have 2 sites, 2 indexer cluster masters, 2 deployers, 30 indexers, and 30 search heads. What should the replication factor and search factor be?
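The right values depend on your availability and retention goals rather than node counts alone, but as a hedged sketch of where these settings live, a common multisite starting point (numbers purely illustrative, not a recommendation for your environment) in server.conf on the cluster master looks like:

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

The search head cluster has its own replication_factor, configured separately in the [shclustering] stanza on the search heads.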
Hello, I have some text I am indexing. In the middle there is a CSV table, and some information at the end. It looks like this:

Text text text text.
#begining of csv#
Aa,BBC,cc,dd
22,1,444,2
44,22,11,3
#end of csv#
Text text text

How do I index only the lines in the CSV as events? Thank you, Dov
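A hedged sketch, assuming each line can become its own event and the parsing happens on an indexer or heavy forwarder: break on every newline, send everything to the nullQueue by default, then route only lines that look like the CSV rows back to the indexQueue. The sourcetype name and the 4-column regex are illustrative:

# props.conf
[my_text_with_csv]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-csvonly = drop_all, keep_csv_rows

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_csv_rows]
REGEX = ^([^,\r\n]+,){3}[^,\r\n]+$
DEST_KEY = queue
FORMAT = indexQueue

The ordering matters: drop_all runs first, then keep_csv_rows re-routes matching lines, so the surrounding text and the #...csv# marker lines never get indexed.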