All Topics

I have a field called lookup_key that contains either a host name or an IP address. I am trying to look the IPs up against a host table and output the match to a new field called host1. If lookup_key is already a host name, just copy it to the new field. The address.csv has IPs in data1 and hosts in data2. Here is where I am currently; any help is appreciated.

| eval lookup_key = if(match(lookup_key, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$"), "|lookup address1.csv data1 as lookup_key OUTPUT data2 as host1 ", "lookup_key=host1")
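A lookup command cannot be embedded inside an eval string like this; the usual pattern is to run the lookup on every event and then fall back to the original value. As a language-neutral sketch of that logic (the table contents and field names below only mirror the question; the Python is an illustration, not SPL):

```python
import re

# Rough IPv4 test, mirroring the regex used in the question.
IPV4 = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

# Stand-in for the address.csv lookup: data1 (IP) -> data2 (host).
address_table = {"10.1.2.3": "web01", "10.4.5.6": "db02"}

def resolve_host(lookup_key: str) -> str:
    """Return the host name for an IP, or the key itself if it is already a host name."""
    if IPV4.match(lookup_key):
        # Fall back to the raw IP when the table has no entry.
        return address_table.get(lookup_key, lookup_key)
    return lookup_key
```

In SPL the same effect is usually achieved by running `lookup` unconditionally and then something like `eval host1 = coalesce(host1, lookup_key)` so host-name rows keep their original value — offered as a direction to try, not a tested search.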
Hello, I'm trying to install a certificate for port 8089 and I don't know what I'm doing wrong. There have already been three scans and the vulnerability keeps appearing. Has someone already solved this? This is the stanza I have in the web.conf file:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/certificate.key
serverCert = /opt/splunk/etc/auth/mycerts/certificate.pem
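One possible cause, offered as an assumption rather than a confirmed diagnosis: web.conf only governs Splunk Web (port 8000 by default), while 8089 is the splunkd management port, whose certificate is configured in server.conf. A sketch of the relevant stanza, reusing the paths above (splunkd generally expects the certificate, private key, and any intermediate chain concatenated into the single PEM named by serverCert):

[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/certificate.pem
sslPassword = <private key password, if one is set>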
Hi all! Can anyone help with the questions below? How do I calculate the total VPN connection time per user? The Duration field is of type string.

Jul 10 07:14:17 xxx.xxx.xxx.xx %ASA-4-113019: Group = XYZ-SSL, Username = zya3, IP = xxx.xxx.xxx.xx, Session disconnected. Session Type: AnyConnect-Parent, Duration: 0h:41m:42s, Bytes xmt: 27921408, Bytes rcv: 4612882, Reason: User Requested
Jul 9 23:55:49 xxx.xxx.xxx.xx %ASA-4-113019: Group = XYZ-SSL, Username = zya3, IP = xxx.xxx.xxx.xx, Session disconnected. Session Type: SSL, Duration: 0h:11m:46s, Bytes xmt: 13452434, Bytes rcv: 5072740, Reason: User Requested
Jul 9 21:36:12 xxx.xxx.xxx.xx %ASA-4-113019: Group = XYZ-SSL, Username = zzw2, IP = xxx.xxx.xxx.xx, Session disconnected. Session Type: SSL, Duration: 14h:38m:38s, Bytes xmt: 487160561, Bytes rcv: 39385026, Reason: User Requested

One suggestion I found was to concatenate the corresponding numeric fields through the eval command:

| transaction user endswith="duration:" keepevicted=true
| eval full_duration = duration_hour.":".duration_minute.":".duration_second

@woodcock you are the man. Thanks
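The core of the problem is converting strings like 0h:41m:42s into seconds before summing. A small sketch of that conversion (in SPL the equivalent would be a rex extraction of the hour/minute/second parts, an eval into seconds, and a stats sum by user — stated here as a direction, not a tested search):

```python
import re
from collections import defaultdict

# Pull the username and the h/m/s parts of Duration out of one ASA event.
DURATION = re.compile(
    r"Username = (?P<user>\S+?),.*Duration: (?P<h>\d+)h:(?P<m>\d+)m:(?P<s>\d+)s"
)

def total_seconds_per_user(log_lines):
    """Sum Duration (in seconds) per Username across ASA disconnect events."""
    totals = defaultdict(int)
    for line in log_lines:
        m = DURATION.search(line)
        if m:
            totals[m.group("user")] += (
                int(m.group("h")) * 3600 + int(m.group("m")) * 60 + int(m.group("s"))
            )
    return dict(totals)
```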
Hi, we are using Splunk Enterprise 8.0.4.1 with a search head and a two-node indexer cluster. As a Splunk administrator, I am getting a "Server error" on certain searches and am unable to save a report or alert for any search.
We are currently running the "Server" version of Confluence in our environment. This version doesn't actually store audit logs locally in a directory. Instead, the logs are only visible through the UI and can be exported from there with a maximum of 100k results. In that case, how would one be able to get these audit logs sent to Splunk in a programmatic manner rather than manually downloading the logs and uploading them to Splunk on a periodic basis? Here is a page which talks about Confluence audit logging and how it is lacking in capability for the "Server" version. The "Data Center" version, which we don't have, logs locally and can easily be sent over to Splunk via UF. https://confluence.atlassian.com/doc/auditing-in-confluence-829076528.html
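One avenue worth exploring: pulling the records over REST on a schedule and pushing them into a Splunk HTTP Event Collector. Everything below is a sketch under stated assumptions — the Confluence audit REST endpoint (/rest/api/audit) should be verified against your Confluence Server version, and the host names and token are placeholders:

```python
import json
import urllib.request

# Assumptions to verify: Confluence Server exposes audit records over REST,
# and a HEC token exists on your forwarder. All values below are placeholders.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def to_hec_event(record, index="confluence"):
    """Wrap one Confluence audit record in the HEC event envelope."""
    return {
        "index": index,
        "sourcetype": "confluence:audit",
        "event": record,
    }

def send_batch(records):
    """POST a batch of audit records to HEC, newline-delimited as HEC accepts."""
    body = "\n".join(json.dumps(to_hec_event(r)) for r in records).encode()
    req = urllib.request.Request(
        HEC_URL,
        data=body,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

Run from cron or a scripted input, keeping a checkpoint of the last-seen timestamp so you don't re-send records.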
tl;dr: What are the initial, default contents of /opt/splunk/etc/deployment-apps/Splunk_TA_windows/local/inputs.conf as it ships with "Splunk_TA_windows", if that file exists and is not empty?

Reason I ask: it does not exist in my instance on the Deployment Server (only apps.conf is in that folder). I am trying to figure out what it should be and how to fix what seems to be a broken "Splunk Add-on for Microsoft Windows" ("Splunk_TA_windows") in an inherited Splunk instance. The TA doesn't seem to be gathering any data and produces errors such as:

ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winhostinfo.exe"" splunk-winhostinfo - Found a invalid type named 'application' in stanza WinHostMon://Application, this will not be processed.

(i.e. the TA can't find the executables or scripts it needs.) I suspect this is because someone merged the TA's own inputs.conf into a single master inputs.conf (/opt/splunk/etc/deployment-apps/_server_app_Windows_Clients/local/inputs.conf on the Deployment Server) and then deleted it, which seems to have broken things. Thanks!

P.S. Apologies for the formatting; for some reason the "Insert/Edit code sample" button doesn't work for me.
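For what it's worth — and this should be verified against a fresh download of the add-on rather than taken as definitive — the TA generally ships with no local/inputs.conf at all: its inputs are defined with disabled = 1 in default/inputs.conf, and local/inputs.conf is created by the administrator purely to enable the ones needed. A hypothetical minimal local/inputs.conf might look like this (stanza names should be cross-checked against your copy's default/inputs.conf):

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Application]
disabled = 0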
I am an admin user in the Splunk console on-prem, and I was going to update the roles of certain admin users from admin down to power. The issue is that whenever I attempt to do this it silently fails: I click Save and all seems well, but when I refresh the console they are still admin. We are authenticating with our AD accounts. I am able to change the role capabilities, but when I attempt to downgrade a user from admin to power there is not even an error message saying what happened to the operation. Any ideas?
This doesn't seem to work. We've followed the instructions provided with the TA, but we're getting errors from the scripts to the effect that basic tokens are missing. We're also reaching out to Okta support directly.

2020-07-10 15:33:13,487 ERROR pid=21467 tid=MainThread file=setup_util.py:log_error:110 | Credential account with username <our okta> can not be found

Yeah, we have this configured.

2020-07-10 15:33:13,487 DEBUG pid=21467 tid=MainThread file=cim_actions.py:message:424 | sendmodaction - worker="$HOSTNAME" signature="_okta_client Invoked with a url of: https://<our okta>/api/v1/groups/<group>/users/<user>" action_name="oktaGroupMemberChange" search_name="<search name>" sid="scheduler__admin_VEEtT2t0YV9JZGVudGl0eV9DbG91ZF9mb3JfU3BsdW5r__RMD5784129dd80607623_at_1594409520_68739" rid="6" app="TA-Okta_Identity_Cloud_for_Splunk" user="admin" digest_mode="0" action_mode="saved"

OK, that seems normal to me. It attempts the API call, but what does cim_actions have to do with it? Yes, we have CIM installed, and the add-on is good for all versions.

2020-07-10 15:33:13,487 ERROR pid=21467 tid=MainThread file=cim_actions.py:message:424 | sendmodaction - worker="$HOSTNAME" signature="Error: 'NoneType' object has no attribute '__getitem__'. Please double check spelling and also verify that a compatible version of Splunk_SA_CIM is installed." action_name="oktaGroupMemberChange" search_name="<search name>" sid="scheduler__admin_VEEtT2t0YV9JZGVudGl0eV9DbG91ZF9mb3JfU3BsdW5r__RMD5784129dd80607623_at_1594409520_68739" rid="6" app="TA-Okta_Identity_Cloud_for_Splunk" user="admin" digest_mode="0" action_mode="saved" action_status="failure"

NoneType has no attribute. Even more vague.
2020-07-10 15:33:14,370 INFO pid=21898 tid=MainThread file=cim_actions.py:message:424 | sendmodaction - worker="$HOSTNAME" signature="Invoking modular action" action_name="oktaGroupMemberChange" search_name="<search name>" sid="scheduler__admin_VEEtT2t0YV9JZGVudGl0eV9DbG91ZF9mb3JfU3BsdW5r__RMD5784129dd80607623_at_1594409580_68741" rid="1" app="TA-Okta_Identity_Cloud_for_Splunk" user="admin" digest_mode="0" action_mode="saved"

Then it goes ahead and tries to call the modular action anyway.

07-10-2020 15:39:23.653 -0400 ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 4., search='sendalert oktaGroupMemberChange results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__admin_VEEtT2t0YV9JZGVudGl0eV9DbG91ZF9mb3JfU3BsdW5r__RMD5784129dd80607623_at_1594409940_68784/per_result_alert/tmp_1.csv.gz" results_link="https://<our search head>/app/TA-Okta_Identity_Cloud_for_Splunk/search?q=%7Cloadjob%20scheduler__admin_VEEtT2t0YV9JZGVudGl0eV9DbG91ZF9mb3JfU3BsdW5r__RMD5784129dd80607623_at_1594409940_68784%20%7C%20head%202%20%7C%20tail%201&earliest=0&latest=now"'

Error code 4, and nothing more than that. The part of the script where that error is thrown is related to gathering parameters. I suspect that maybe this feature was implemented but never tested or confirmed to work. But I could be wrong...
Hello, I have a problem with some .sqlaudit files. These files are being stored under the path Z:\audit\. I installed a forwarder, but Splunk doesn't seem to recognize these files. I use the Splunk Add-on for SQL Server, and only Performance logs come in. Does anyone know how I can ingest the .sqlaudit files?
Hi, I have installed the Splunk Add-on for AWS on my local machine with a Splunk free license and version 8.0.3. The problem is that the add-on only shows a loading indicator and does not display any tab. As per the documentation, I first installed the lower version of the add-on (4.6), then disabled its inputs and upgraded it to 5.0 and then to 5.1, but it still did not work. I even followed the answers in which it was mentioned to set a BOTO_CONFIG attribute in splunk-launch.conf. Please help.
How much storage can we save by enabling a frozen path and rolling data over from the cold bucket? We have tons of data coming in, and instead of storing data in the cold DB for one year, we want to store it in the cold DB for three months at most; anything older we will move to frozen. Before enabling that, we would like to understand how much storage we would save by keeping 9 months of data frozen and retrieving it whenever required.

Example: take 3 TB coming in per day; we have 14 indexers in a multisite cluster (RF = 2, SF = 1). We would like to keep data for at most 3 months in the hot/warm/cold DB and then move it to the frozen DB. What would the compression factor be if we do that? Is it going to save storage, or will it consume the same amount?
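A rough way to estimate it, using commonly cited Splunk sizing rules of thumb (treat the ratios as assumptions to validate against your own environment): searchable storage costs about 15% of raw volume for the compressed rawdata journal plus about 35% for tsidx index files; frozen keeps only the rawdata journal, with no index files. Replication multiplies rawdata by the replication factor and index files by the search factor:

```python
# Rule-of-thumb ratios (assumptions; measure your own data to refine them).
RAWDATA_RATIO = 0.15   # compressed journal vs. raw ingested volume
TSIDX_RATIO = 0.35     # index files vs. raw ingested volume

def searchable_gb_per_day(raw_gb, rf, sf):
    """Daily cluster-wide storage for searchable hot/warm/cold data."""
    return raw_gb * (RAWDATA_RATIO * rf + TSIDX_RATIO * sf)

def frozen_gb_per_day(raw_gb):
    """Frozen keeps only the rawdata journal; assumes you archive a single
    copy per bucket (replicated copies can otherwise be frozen too)."""
    return raw_gb * RAWDATA_RATIO

# The scenario in the question: 3 TB/day, RF=2, SF=1, 9 months (~270 days).
daily_raw = 3000
cold_9mo = searchable_gb_per_day(daily_raw, rf=2, sf=1) * 270    # ~526,500 GB
frozen_9mo = frozen_gb_per_day(daily_raw) * 270                  # ~121,500 GB
```

Under those assumptions, freezing the older 9 months cuts that slice of storage by roughly a factor of four, at the cost of having to thaw (re-index) buckets before they can be searched again.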
Having an issue with Enterprise Security and a search-driven lookup. I've created one with manual settings and enabled its search, and it is reporting as running, but it does not seem to create a lookup. When you use the manual setting, do you have to include the outputlookup command? I was thinking that you did not, because you define your lookup name before you get to entering your search.
Hi Team, I want to monitor an LDAP server. Does anyone have an LDAP extension available? Thanks, Shahid
Hi all, I am facing a peculiar problem starting the stream forwarder as a service on Ubuntu 18.04 (in dedicated mode): it cannot start unless I use this command:

/opt/streamfwd/bin/streamfwd -D

Using other start methods, I receive this error in streamfwd.log:

2020-07-10 20:02:11 INFO [140564408074944] (CaptureServer.cpp:452) stream.CaptureServer - Launch child process for dedicated capture mode
2020-07-10 20:02:11 INFO [139766768343360] (CaptureServer.cpp:490) stream.CaptureServer - Launch child process for restoring interfaces
2020-07-10 20:02:11 INFO [139766768343360] (CaptureServer.cpp:816) stream.CaptureServer - Found DataDirectory: /opt/streamfwd/data
2020-07-10 20:02:11 INFO [139766768343360] (CaptureServer.cpp:822) stream.CaptureServer - Found UIDirectory: /opt/streamfwd/ui
2020-07-10 20:02:11 ERROR [139766768343360] (SnifferReactor/DpdkNetworkCapture.cpp:1308) stream.NetworkCapture - Error: basic_string::_S_construct null not valid
2020-07-10 20:02:11 FATAL [139766768343360] (main.cpp:1150) stream.main - Failed to start streamfwd, the process will be terminated: DPDK failed to initialize
2020-07-10 20:02:11 INFO [140041836300608] (CaptureServer.cpp:622) stream.CaptureServer - kernel interfaces restored

Do you have any idea how to resolve this problem? Thanks
Hi, I'm trying to make Java exceptions easier to read in a dashboard. The data is in JSON format, and I have a search which formats it into a table, but the exceptions are just too big to be displayed for all events at the same time. What I would like to do is display the first 3 columns in the dashboard so I can identify the relevant event; then, once an event is clicked, it expands to show more columns containing the exception details. Any assistance would be greatly appreciated.
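One common pattern in Simple XML, sketched from memory and worth validating (the searches, index names, and field names are placeholders): a summary table whose row drilldown sets a token, plus a second panel that only renders once the token exists and searches on the clicked value.

<row>
  <panel>
    <table>
      <title>Events (summary)</title>
      <search><query>index=app sourcetype=java:log | table _time, service, message</query></search>
      <drilldown>
        <!-- Capture the clicked row's time so the detail panel can find it -->
        <set token="selected_time">$row._time$</set>
      </drilldown>
    </table>
  </panel>
  <panel depends="$selected_time$">
    <table>
      <title>Exception details</title>
      <search><query>index=app sourcetype=java:log _time=$selected_time$ | table exception, stacktrace</query></search>
    </table>
  </panel>
</row>

The detail query is only illustrative; matching on an exact _time may need earliest/latest bounds or a unique event ID in practice.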
Is there a way to set the maximum cluster size for the clusters generated by the "cluster" command?
I've found that for Splunk Enterprise, there is the Securing Splunk Enterprise document, outlining recommended security configurations. Does a similar document exist for Splunk Cloud to ensure customers are taking the necessary actions for security?    
I'd like to display stats based on a custom string within a log entry. Below is a sample of the log entries. I'd like to parse the unique entries seen after the "The following DAP records were selected for this connection:" string and, if possible, use the stats ... by method so it displays each unique entry with the number of times it's been seen. So in the case of the two entries below, the stats would show TEST_AUTOMATION_VENDOR and TEST2_AUTOMATION_VENDOR, each with a count next to it. I can do this for VPN users quite easily, but can't figure out how to do it for unique values of a string. I only know the basics of Splunk search syntax, so hopefully I'm explaining this clearly.

%ASA-dap-6-734001: DAP: User TESTUSER, Addr 10.10.10.10, Connection AnyConnect: The following DAP records were selected for this connection: TEST_AUTOMATION_VENDOR
%ASA-dap-6-734001: DAP: User TESTUSER2, Addr 12.12.12.12, Connection AnyConnect: The following DAP records were selected for this connection: TEST2_AUTOMATION_VENDOR
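The extraction itself is just a regex anchored on the fixed marker text, followed by a count per captured value. A sketch of that logic (the marker and record names come from the samples above):

```python
import re
from collections import Counter

# Capture the token that follows the fixed marker text.
DAP = re.compile(
    r"The following DAP records were selected for this connection: (?P<record>\S+)"
)

def count_dap_records(events):
    """Count occurrences of each DAP record name across events."""
    counts = Counter()
    for event in events:
        m = DAP.search(event)
        if m:
            counts[m.group("record")] += 1
    return counts
```

In SPL the same idea is usually written as `| rex "selected for this connection: (?<dap_record>\S+)" | stats count by dap_record` — offered as a direction to try rather than a verified search.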
Hello everyone, when a user visits a website it can make hundreds of separate requests related to advertising, so I want to exclude all those logs and keep only the logs with 'real visits' to urlc: online shopping. To be able to measure that, I came up with the idea of searching only the logs where the user logged into their account. In which field(s) can I find this type of information? With keywords in the url? Mtg? Status? Mt? rule? Connect_protocol? http_method? Thanks
At my organization, we're planning to ingest about 100 GB/day, leveraging 1 heavy forwarder to pull the following data sources and send them over to our index cluster:

Oracle Database Standard and Fine-Grained audit logs
Oracle WAF logs (via HTTP Event Collector, which shall be configured on the HF)
Qualys Vulnerability Management data

We estimate that these data sources will account for close to 30 GB/day in total, and we are using Splunk ES in our environment. Any recommendations on CPU and RAM specs? So far, we have a server for the HF with 8 CPUs and 32 GB RAM. Is that enough, or should we scale down/up?