All Topics

I've got a dashboard that parses logs to show the latest completion status of a Rundeck job across multiple executions. As part of that query I've extracted a field, rundeck_job_id, that holds the ID of each job. It looks like I can use a drilldown to link to a custom URL, which I would like to be https://rundeck.server/project/Project_Name/execution/show/$rundeck_job_id$  Ideally this would let users find their running/failed/etc. job in the table, then click through to the URL that corresponds to their execution. The problem is that I can't seem to get the drilldown to evaluate tokens, or I'm not setting them up correctly. I've tried $row.rundeck_job_id$, and I've tried setting this token in various places in the dashboard, but nothing works: the drilldown takes $rundeck_job_id$ or $row.rundeck_job_id$ literally and goes to a broken page.
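For reference, in Simple XML a row token is normally referenced as $row.<fieldname>$ inside a <drilldown>/<link> element of the table panel itself, and it only populates if the field is an actual column of the table. A minimal sketch of what I understand that to look like — the search and panel contents here are placeholders, not the actual dashboard:

```xml
<table>
  <search>
    <query>index=main sourcetype=rundeck | table job status rundeck_job_id</query>
  </search>
  <drilldown>
    <!-- $row.rundeck_job_id$ should expand to the clicked row's value,
         provided rundeck_job_id is a visible column of this table -->
    <link target="_blank">https://rundeck.server/project/Project_Name/execution/show/$row.rundeck_job_id$</link>
  </drilldown>
</table>
```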
I have made a report from Dashboards & Reports, category Reports. The report works well, but the front page doesn't seem to support national characters. When using the Norwegian characters øæå, the report replaces them with question marks. Is it possible to set up a codepage that supports the Norwegian characters for the PDF report front page?
Hi, I have the following search: | inputlookup ldap_assets.csv | lookup existing_assets dns output ip bunit category city country owner priority | outputlookup create_empty=false createinapp=true override_if_empty=false merged_assets.csv  The 'ldap_assets.csv' contains a list of assets and their attributes. The search then runs the lookup command against the 'existing_assets' lookup, which contains other asset attributes (a manually created list), and outputs the results to merged_assets.csv. The problem I'm having is that if a record exists in existing_assets but does not exist in ldap_assets.csv, it is excluded from the results of the outputlookup command. I would like those existing records to still be included in the merged_assets.csv file. Basically, I want the two files to merge without any exclusions after the lookup. Can somebody tell me where I'm going wrong? Thanks.
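The desired outcome is essentially an outer join of the two lookup files, keyed on dns. Outside of SPL, the merge logic can be sketched in Python — file names, the key column, and the "fill only empty attributes" policy are assumptions based on the search above:

```python
import csv

def merge_assets(ldap_path, existing_path, out_path, key="dns"):
    """Outer-merge two lookup CSVs on a key column.

    Rows present only in existing_assets are kept, and attributes from
    existing_assets fill gaps in rows present in both files.
    """
    with open(ldap_path, newline="") as f:
        merged = {row[key]: row for row in csv.DictReader(f)}
    with open(existing_path, newline="") as f:
        for row in csv.DictReader(f):
            target = merged.setdefault(row[key], {})
            for col, val in row.items():
                if not target.get(col):  # only fill empty/missing attributes
                    target[col] = val
    fields = sorted({c for r in merged.values() for c in r})
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(merged.values())
```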
I want to offer a "Save file" dialog from a Python script. The script and Splunk run on a server, the dialog should appear in the client's browser, and the file should be saved on the client's file system. Is this possible without using JavaScript? Are there any built-in dialog functions?  TL;DR: I need to transfer a file from Splunk on the server to the client via Python, but I don't know the client's IP address.
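For what it's worth, the browser's "Save file" dialog is triggered by the HTTP response itself, not by the server knowing the client's IP: the client initiates the request, and a Content-Disposition: attachment header makes the browser prompt to save. A minimal sketch of that mechanism with Python's standard library (this illustrates the general HTTP behavior, not a Splunk-specific API):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DownloadHandler(BaseHTTPRequestHandler):
    """Serve a file so that the browser shows its own 'Save file' dialog."""

    def do_GET(self):
        body = b"example report contents"
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        # Content-Disposition: attachment is what triggers the save dialog
        self.send_header("Content-Disposition", 'attachment; filename="report.txt"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet
```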
Hello, I have a Splunk Enterprise instance on Windows 10 up and running. I installed the "Splunk Cloud Gateway" app to make the data available via the mobile app. The installation on the Splunk server is OK: the app installed correctly and the server is running. But when I try to access different dashboards via the mobile app I get various errors, the views take extremely long to load, and so on. If I open a browser on the Splunk Enterprise indexer itself and check the status of the Cloud Gateway via the "Cloud Gateway Status Dashboard", the views "Websocket Python Process", "Websocket Sodium Process", "Subscription Python Process" and "Subscription Sodium Process" all show an error. I looked at the log and found this error: Error in 'scgpstree' command: External search command exited unexpectedly with non-zero error code 1.  Do you have any suggestions as to what is going on? The view "Websocket Disconnections Counts" also shows some disconnects. Using the apps and dashboards via the browser on the indexer itself, I don't have any problems. I have only one instance with one active indexer, and no other infrastructure than this one system. Regards, Jens
How do I convert the date that is in the dropdown to "Y-M-D"? I don't want the other stuff next to it (T00:).  index="main" | eval _time = strptime('close_approach_data{}.close_approach_date', "%Y-%m-%d") | table _time | dedup _time
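In SPL this is usually a parse-then-format round trip: strptime to get an epoch, then a second eval with strftime(_time, "%Y-%m-%d") to drop the time-of-day part. The same round trip, sketched in Python (the sample input value is made up):

```python
from datetime import datetime

def ymd_only(value):
    """Drop the time-of-day tail, e.g. '2021-01-02T00:00:00' -> '2021-01-02'."""
    dt = datetime.strptime(value[:10], "%Y-%m-%d")  # parse just the date part
    return dt.strftime("%Y-%m-%d")                  # format back without the T00: tail
```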
Hi all, I need to run an alert every 2 hours from 8 AM to 11 PM, and I would like it scheduled 30 minutes past the hour, e.g. 8:30, 10:30, 12:30, 14:30, 16:30, 18:30, 20:30, 22:30. Thanks for your help. PS: I tried 30 8,10,12,14,16,18,20,23 * * * but it does not seem to work well.
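Standard five-field cron supports a step on a range, so the 8:30–22:30 every-two-hours schedule is usually written 30 8-22/2 * * * (note the last hour is 22, not 23, for a 22:30 run). A small Python sketch expanding such an hour field, assuming standard cron range/step semantics:

```python
def cron_hours(spec):
    """Expand a cron hour field like '8-22/2' into the concrete hours it fires."""
    field, _, step = spec.partition("/")
    lo, _, hi = field.partition("-")
    hi = hi or lo  # a bare hour has no range
    return list(range(int(lo), int(hi) + 1, int(step or 1)))
```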
Can someone tell me what this log record means? I see MANY of them across all my Windows hosts, but I am unsure of why it's invoking winprintmon.exe. We ARE monitoring Windows events on this machine, BUT not printer monitoring.

02/03/2021 02:02:29 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=6417
EventType=0
Type=Information
ComputerName=hostname.domain.com
TaskCategory=System Integrity
OpCode=Info
RecordNumber=3903849
Keywords=Audit Success
Message=The FIPS mode crypto selftests succeeded.
Process ID: 0x1e2c
Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe

I am just unsure why it's invoking winprintmon. It seems to run every minute. Thanks as always.
Hello,

I have logs in this format and I want to count the number of error codes per SNR.

2021-01-25 12:59:18,355 - [INFO] SNR: 917173
2021-01-25 12:59:21,868 - [INFO] 0x100:S_Home
2021-01-25 12:59:22,312 - [INFO] 0x130:S_Cycle
2021-01-25 12:59:22,314 - [INFO] 0x154:S_VACON
2021-01-25 12:59:22,316 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:22,629 - [INFO] new file cycle: 17320
2021-01-25 12:59:23,141 - [INFO] 0x154:S_VACON
2021-01-25 12:59:23,142 - [INFO] 0x151:S_MQLON
2021-01-25 12:59:23,741 - [INFO] 0x154:S_VACON
2021-01-25 12:59:23,742 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:25,645 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:25,646 - [INFO] 0x156:S_VACOFF
2021-01-25 12:59:25,721 - [INFO] 0x100:S_Home
2021-01-25 12:59:27,095 - [INFO] 0x130:S_Cycle
2021-01-25 12:59:27,102 - [INFO] 0x154:S_VACON
2021-01-25 12:59:27,104 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:27,425 - [INFO] new file cycle: 17321
2021-01-25 12:59:27,952 - [INFO] 0x154:S_VACON
2021-01-25 12:59:27,953 - [INFO] 0x151:S_MQLON
2021-01-25 12:59:28,856 - [INFO] 0x154:S_VACON
2021-01-25 12:59:28,857 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:30,450 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:30,451 - [INFO] 0x156:S_VACOFF
2021-01-25 12:59:30,504 - [INFO] 0x100:S_Home
2021-01-25 12:59:31,624 - [INFO] 0x130:S_Cycle
2021-01-25 12:59:31,625 - [INFO] 0x154:S_VACON
2021-01-25 12:59:31,627 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:31,951 - [INFO] new file cycle: 17322
2021-01-25 12:59:32,478 - [INFO] 0x154:S_VACON
2021-01-25 12:59:32,479 - [INFO] 0x151:S_MQLON
2021-01-25 12:59:33,432 - [INFO] 0x154:S_VACON
2021-01-25 12:59:33,433 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:34,920 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:34,922 - [INFO] 0x156:S_VACOFF
2021-01-25 12:59:34,993 - [INFO] 0x100:S_Home
2021-01-25 12:59:36,321 - [INFO] 0x130:S_Cycle
2021-01-25 12:59:36,325 - [INFO] 0x154:S_VACON
2021-01-25 12:59:36,327 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:18,355 - [INFO] ADU identified SNR: 917175
2021-01-25 12:59:37,190 - [INFO] 0x154:S_VACON
2021-01-25 12:59:37,190 - [INFO] 0x151:S_MQLON
2021-01-25 12:59:38,157 - [INFO] 0x154:S_VACON
2021-01-25 12:59:38,158 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:39,674 - [INFO] 0x152:S_MQLOFF
2021-01-25 12:59:39,676 - [INFO] 0x156:S_VACOFF
2021-01-25 12:59:39,742 - [INFO] 0x100:S_Home
2021-01-25 12:59:40,902 - [INFO] 0x130:S_Cycle
2021-01-25 12:59:40,904 - [INFO] 0x154:S_VACON
2021-01-25 12:59:40,906 - [INFO] 0x152:S_MQLOFF

In this case, the result I expect is the following:
Error code 0x130: 4 occurrences for SNR 917173, 1 occurrence for SNR 917175
Error code 0x154: 7 occurrences for SNR 917173, 3 occurrences for SNR 917175
etc.

I use the transaction command to group the events: transaction startswith="SNR"
But when doing this, the number of error codes is reduced to the number of events, which is wrong for me. Do you have an idea how to reach the result I expect?
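One way to sanity-check the expected numbers outside Splunk: carry the most recent SNR forward and count codes under it (in SPL, streamstats with last(SNR) typically plays this role instead of transaction). A Python sketch of that grouping logic:

```python
import re
from collections import Counter, defaultdict

def count_codes_by_snr(lines):
    """Group error-code occurrences under the most recent SNR line."""
    counts = defaultdict(Counter)
    snr = None
    for line in lines:
        m = re.search(r"SNR:\s*(\d+)", line)
        if m:
            snr = m.group(1)  # a new SNR starts a new group
            continue
        m = re.search(r"(0x[0-9A-Fa-f]+):", line)
        if m and snr is not None:
            counts[snr][m.group(1)] += 1
    return counts
```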
Hi, I'm having the hardest time figuring out how to pass an event field into a macro as a variable argument. This is my test_macro, accepting one argument:  | eval $sub_arg$=subject | sendemail to="myemail address" format="html" server="myserver address" use_tls=1 subject=$sub_arg$  Test SPL: | makeresults | eval subject = "Test Subject" `test_macro(subject)`  The subject comes through as "subject" rather than "Test Subject". What am I doing wrong? Thank you! Chris
How do I display the below as a bubble chart? When I select the bubble chart for my search query, it's not working properly and shows _time as 0. I want a bubble chart with: x axis = _time, y axis = "y-axis", size of bubble = "bubble_size".
Guys, I have the following query, which shows results by host and works very well. However, I need to replace each host with another value, as in the example below:   index=text (host=host1 OR host=host2 OR host=host3 OR host=host4) | timechart span=1h count by host   host1 = Valuea host2 = Valueb host3 = Valuec host4 = Valued   What is the best way to make this replacement using eval or a lookup? Att.
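Whatever the SPL mechanism (an eval with case(...) before the timechart, or a lookup file mapping host to label), the underlying logic is a key-to-label map with a pass-through default. Sketched in Python, with the label values assumed from the example above:

```python
# display labels assumed from the example above
HOST_LABELS = {
    "host1": "Valuea",
    "host2": "Valueb",
    "host3": "Valuec",
    "host4": "Valued",
}

def relabel(host):
    """Map a raw host name to its display label; unknown hosts pass through."""
    return HOST_LABELS.get(host, host)
```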
Hello, we are using Splunk v8.1.1. I have one user with multiple roles, so he can access multiple indexes and hosts. The user additionally needs access to one host in a multi-host index: - role1 -> index1 -> all hosts - role2 -> index2 -> all hosts - role3 -> index3 -> one host (foo) of many  So I created a new role3 for index3 with a search filter for the host -> (host::foo). Owning the three roles, the user now has access only to the host foo.  How can I limit the access to one host in a multi-host index without affecting the other roles?  Best Regards, Christian
Hi, hoping someone can help with this, as it's been a while since I used Splunk and I can't seem to figure it out! I'm trying to import a CSV that has a field with a time format of: [20210102] 06:58.10  I have tried TIME_FORMAT=%Y%m%d %H:%M.%S and I get a _time field that is correct except it doesn't show the seconds; the above is returned as 02/01/2021 06:58:00. I'm pretty sure it's to do with the way the square brackets are being interpreted, but I can't seem to work out how to ignore them. Adding them into the TIME_FORMAT string doesn't help. Thanks.
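For comparison, strptime-style format strings generally accept literal characters such as the brackets directly, e.g. [%Y%m%d] %H:%M.%S. Python's strptime uses the same % codes, so the intended parse can be checked there (whether props.conf TIME_FORMAT behaves identically here is the open question):

```python
from datetime import datetime

def parse_bracketed(raw):
    # the brackets appear as literal characters in the format string
    return datetime.strptime(raw, "[%Y%m%d] %H:%M.%S")
```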
I am working on a Splunk app that uses the KV Store. When a request is made to the KV Store, a pop-up window appears on the screen prompting the user to enter their username and password. This is the code making the request:

fetchFoos() {
  return fetch(
    "servicesNS/nobody/myapp/storage/collections/data/foo",
    {
      mode: "cors",
      credentials: "include",
    }
  )
    .then((response) => response.json())
    .catch((err) => err)
}

I do not want to hard-code a username and password into the headers. Is there a way to use the KV Store with session cookies or tokens (such as HEC) so the user does not have to re-enter their credentials?
Hi All, I have a couple of extracted fields, and for most events the field value is null; only a small number of events actually carry a value. Example raw data:  {"name":"X-ABC-ConversationID","value":"79xxxxxxxxxxxxxxxxf76"} {"name":"ABC-ConversationID","value":"3xxxxxxxxxxxxxxxxxxxxxb7a"} {"name":"abc-conversationid","value":"cxxxxxxxxxxxxxxxxxee993d"}   Query:  index=xxx sourcetype=xxx:xxx:xxx httpsourcename=xxx | rex field=_raw "\{\"name\"\:\"(xxx|xxx|X)\-(ConversationID|conversationid|xxx)\-?(ConversationID|\")?(\"|,)(\"|,)\"value\"\:\"(?<xxx_ConversationID>[^\"]+)" | table xxx_ConversationID | fillnull value=NULL  When I check the statistics view, I see that most events show "NULL"; only a few are filled with an actual value. I understand that only some events may carry the value, but I wanted to check what percentage of events have the field populated, so I followed the steps below. In Splunk I selected verbose mode and the field name, ran the query over the last 30 days, and checked the percentage for the selected field: total events captured over 30 days is 13,628,581 (All Events); for the selected extracted field, 24 values, 0.732% of events.  Questions: 1) The percentage of events is less than 1%, so should I consider this field for data normalization? 2) Is there a query to find the amount of data captured where the extracted field contains a value, excluding NULL? I used the query below to find the unique values and their count:  index=xxx sourcetype=xxx:xxx:xxx httpsourcename=xxx | rex field=_raw "\{\"name\"\:\"(xxx|xxx|X)\-(ConversationID|conversationid|xxx)\-?(ConversationID|\")?(\"|,)(\"|,)\"value\"\:\"(?<xxx_ConversationID>[^\"]+)" | table xxx_ConversationID | fillnull value=NULL | where xxx_ConversationID!="NULL" | stats values(xxx_ConversationID),count(xxx_ConversationID)  But when I want to do this for 36 fields, I am not sure how to write the query. Can you guide me please? Thanks in advance.
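Question 2 boils down to a fill rate per field: the share of events in which the field is non-null. In SPL, stats count(<field>) already ignores null values, so one search with a count per field plus an eval of filled/total can cover all 36 fields without a where clause per field. The arithmetic, sketched in Python with made-up events:

```python
def fill_rates(events, fields):
    """Percentage of events in which each field carries a real (non-NULL) value."""
    total = len(events)
    rates = {}
    for field in fields:
        filled = sum(1 for e in events if e.get(field) not in (None, "", "NULL"))
        rates[field] = round(100.0 * filled / total, 3) if total else 0.0
    return rates
```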
I have seen a few regex examples for this, and I have used the online regex tools to test my regex for blacklisting files that begin with a period (.), yet this example is not working.

Example inputs.conf:

[monitor:///dir/dir/dir/syslog]
index = index
sourcetype = sourcetype
host_regex = syslog\/(?P<host>.*)\.syslog
blacklist = ^\.\S

Example filename: .filename.syslog.2021-01-01
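One likely culprit: the monitor blacklist is matched against the full path of the file, so a pattern anchored with ^\. can never match a path that starts with /dir/.... A pattern that instead targets a leading dot in the final path segment can be checked with Python's re, which handles this flavor of pattern the same way (whether Splunk applies it identically is the assumption here):

```python
import re

# candidate blacklist: a dot-file as the last segment of the full path
DOTFILE = re.compile(r"/\.[^/]+$")

def is_blacklisted(path):
    """True when the file name (last path segment) begins with a period."""
    return bool(DOTFILE.search(path))
```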
What's a good way to find a user who logs on to RDP with one user account and then uses another, like a privileged user account? I know the event codes that need to be monitored: EventCode=1146 OR EventCode=1147 OR EventCode=1148 OR EventCode=1149 OR EventCode=4624 OR EventCode=4625 OR EventCode=21 OR EventCode=22 OR EventCode=23 OR EventCode=24 OR EventCode=25 OR EventCode=39 OR EventCode=40 OR EventCode=4778 OR EventCode=4779 OR EventCode=4634 OR EventCode=4647 OR EventCode=9009  Here is the post that documents the event IDs: https://ponderthebits.com/2018/02/windows-rdp-related-event-logs-identification-tracking-and-investigation/  Which command would tell the story better: concurrency vs. streamstats vs. timechart? Or is it a combination of concurrency and timechart, or am I totally off?
Can anyone help me understand why notable events are not being populated in Splunk Enterprise Security? I've reinstalled the Enterprise Security app to see if that fixes the problem, but no luck. I've also enabled the correlation searches that ship with the app by default. A correlation search returns event results when run explicitly, but when it runs on schedule no notable events are generated. I manually tried creating a notable event, but I still do not see any notable events in Security Posture or the other tabs. To validate, I checked the notable index (i.e. index="notable"), but even that returns 0 events. I have tried everything with no luck. Can someone help me understand what is causing the issue?
- We tried to implement shclustering with splunk-ansible (https://github.com/splunk/splunk-ansible).
- But it is not possible to deploy apps from the deployer, because the scripts do not run an initial push (unless specifying splunk.apps_location); they rely on pulling configs from the deployer instead.

The problem is that pulling configs from the deployer does not work until the first push from the deployer. Internally, the request to the deployer 'GET /services/apps/deploy?output_mode=json HTTP/1.1' fails with a 401 status. After the first push, the request is processed correctly with status 200. I managed to reproduce the error internally (outside Ansible) and can provide diags with debug logs. I believe either the deployer implementation must be improved or the Ansible scripts must be fixed to handle SHC correctly.

Steps to reproduce: 1. Deploy the SH cluster. 2. Restart any member of the cluster. 3. Check its splunkd.log.