All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi guys, I have a JSON file with the OS type for each node in some clusters, like below: { "clusterA": ubuntu, "clusterA": ubuntu, "clusterA": rhel5, "clusterA": sles11, "clusterB": sles11, "clusterB": sles11, "clusterB": ubuntu, "clusterC": centos, "clusterC": ubuntu ... } I'd like to count the OS types per cluster; in the sample above, that would be 2 ubuntu in clusterA, 1 rhel5 in clusterA, and so on. Would you please kindly help out? Thank you!
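One possible SPL sketch, assuming the raw JSON is searchable as-is; the index, sourcetype, and the assumption that every key starts with "cluster" are placeholders taken from the sample. It pulls out every cluster/OS pair with rex, zips them back together, and counts:

```spl
index=my_index sourcetype=my_json
| rex max_match=0 "\"(?<cluster>cluster\w+)\":\s*(?<os>\w+)"
| eval pair=mvzip(cluster, os)
| mvexpand pair
| eval cluster=mvindex(split(pair, ","), 0), os=mvindex(split(pair, ","), 1)
| stats count BY cluster, os
```

This should give one row per cluster/OS combination, e.g. clusterA/ubuntu with a count of 2 for the sample above.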
I have used the geostats command to show the number of blackouts and brownouts by country and have set the pie chart color to black. The pop-up description that appears when you hover over a pie chart has a black background; is there a way to change this to white so that the Blackout field can be seen better? Note: I have tried changing the dashboard to a dark theme but did not like the way it looked.
05-15-2020 09:16:00.244 +0200 ERROR ExecProcessor - message from "python .......app/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode" Response too big. Need to collect it in pages. Starting collection... Why am I getting this error? Below is the configuration in my inputs.conf:

[script://...bin/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode]
disabled = 0
sourcetype = cisco:apic:health
index = cisco-aci
interval = 21600
Hello everyone! Recently I ran into an issue with AppInspect and the email alert action. I got this message: "Alert name" has specified the `action.email.to` property with a provided value. This should be left empty or removed. File: default/savedsearches.conf Line Number: 50 However, if I put the email address into action.email.cc it passes, but I need to have an email address in the "to" option (which is kind of weird, in my opinion). The Splunk dev documentation (https://dev.splunk.com/enterprise/docs/releaseapps/appinspect/appinspectreferencetopics/splunkappinspectcheck/#Saved-search-standards) also mentions: Check that email alerts (action.email.to) set in savedsearches.conf do not have a default value. But my client currently has a Splunk Cloud deployment and we just want to put all the alerts into a private app for internal usage. So, is there any way to get around this issue? As it stands, it looks like I cannot upload a private app to Splunk Cloud if its alerts set the "email.to" option.
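For what it's worth, the check seems to want the recipient left blank in the packaged app, so a savedsearches.conf sketch like the following (the stanza name and subject are illustrative) should pass AppInspect; the actual address would then be set on the alert in Splunk Cloud after installation:

```conf
[My internal alert]
action.email = 1
action.email.to =
action.email.subject = My internal alert fired
```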
Hi team, I need to create an alert that fires when the daily count of a particular field is less than 30% of the monthly average daily count. How can I do this?
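Not a definitive answer, but one possible SPL sketch, with the index and field names as placeholders: compute the average daily count over the last 30 days and compare today's count against 30% of it:

```spl
index=my_index my_field=* earliest=-30d@d latest=@d
| bin _time span=1d
| stats count BY _time
| stats avg(count) AS avg_daily
| appendcols
    [ search index=my_index my_field=* earliest=@d latest=now
      | stats count AS today ]
| where today < 0.3 * avg_daily
```

Scheduled daily with an alert condition of "number of results > 0", this would only trigger on days where the count drops below the threshold.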
This may be a simple question, but I have been unable to find an answer as of yet. I need to know how to use a Google API key in Maps+. Currently, when I try to do something like Street View, I get a pop-up saying: API Key Failure Failed to get API key for user: , realm: undefined - Verify credentials and try again. I have an API key; I just need to know where to put it so I can get it to work. Any assistance would be appreciated.
Hello, we have connected an FMC with 12 Security Gateways to Splunk using the eStreamer add-on installed on a HF. Log ingestion works fine, but we have issues with filtering. During log analysis it turned out that the order of the fields in DNS logs is not the same in each message; there can be ~6 variants, which causes great pain for the filtering. (We need to filter out internal DNS requests and keep requests for external resources.) We were able to create 5 filters but, unfortunately, since they are rather heavy, Splunk throws errors when we implement the 6th. Examples of such logs:

rec_type=71 monitor_rule_7=N/A fw_rule_action=Fastpath src_tos=0 dns_resp_id=0 event
rec_type=71 netbios_domain="" file_count=0 referenced_host="" monitor_rule_7=N/A monitor_rule_6=N/A fw_rule_action=Fastpath
rec_type=71 fw_rule_action=Fastpath dns_rec_id=0 client_app=Unknown event_subtype=1

I would like to ask if there is a way to tackle this problem. Regards, Dawid
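Not sure this fits your exact filters, but since only the field order varies, it may help to anchor each filter on the key=value pair itself instead of its position, which collapses the ~6 variants into one regex. A hedged index-time sketch; the sourcetype, the dns_query field name, and the internal-domain pattern are all placeholders you would need to replace with your own:

```conf
# props.conf
[cisco:estreamer:data]
TRANSFORMS-drop_internal_dns = drop_internal_dns

# transforms.conf
[drop_internal_dns]
REGEX = rec_type=71(?=.*dns_query="[^"]*\.corp\.example\.com")
DEST_KEY = queue
FORMAT = nullQueue
```

The lookahead means the match works no matter where the DNS name appears in the message.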
I need to remove a prefix from a JSON array; that is, remove everything before {"id" in events like this: {"@odata.context":"https://graph.microsoft.com/v1.0/$metadata#auditLogs/directoryAudits","value":[{"id"
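A possible search-time sketch using SPL's sed mode: match everything up to the first {"id" non-greedily, capture the {"id" itself, and keep only the capture. (For index-time stripping, the same expression could go in a SEDCMD in props.conf.)

```spl
... | rex mode=sed field=_raw "s/^.*?(\{\"id\")/\1/"
```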
Hello, I have data from 2 different sources with the same fields, as shown below:

index= sourcetype= source=test.txt
device_name="alpha" pool_name="a"
device_name="beta" pool_name="b"
device_name="gamma" pool_name="c"

index= sourcetype= source=test1.txt
device_name="alpha" pool_name="a"
device_name="beta" pool_name="b"
device_name="gamma" pool_name="z"

eval actual_pools = toString(device_name) + ";" + toString(pool_name)

I am looking for the actual_pools values (built with the eval above from the raw data) that exist in source=test1.txt but not in source=test.txt. Thanks
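One way to sketch this in SPL, with index and sourcetype left as placeholders: build actual_pools for both sources, group by the combined value, and keep the combinations seen only in test1.txt:

```spl
index=my_index sourcetype=my_sourcetype (source="test.txt" OR source="test1.txt")
| eval actual_pools = device_name . ";" . pool_name
| stats values(source) AS sources BY actual_pools
| where mvcount(sources)=1 AND sources="test1.txt"
```

For the sample data this should return only gamma;z, since that pairing appears in test1.txt but not in test.txt.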
I have a data source where the log format is the same but one attribute changes across logs. I want to extract the field name and field value from the log itself. Is this possible? Please find sample logs below:

May 15 04:29:41 host datasource: "0" "Enterprise Forest" "domain" "field2" "severity" "user" "id" "profileid" "type" "eventid" whencreated=""2019-05-16T08:31:32.0000000Z""
May 15 04:29:41 host datasource: "0" "Enterprise Forest" "domain" "field2" "severity" "user" "id" "profileid" "type" "eventid" pwdlastset=""2019-05-16T08:31:32.0000000Z""
May 15 04:29:41 host datasource: "0" "Enterprise Forest" "domain" "field2" "severity" "user" "id" "profileid" "type" "eventid" badpwdcount="20"
May 15 04:29:41 host datasource: "0" "Enterprise Forest" "domain" "field2" "severity" "user" "id" "profileid" "type" "eventid" operatingsystemversion=""6.1 (7601)""

If you notice, only the last attribute changes in each log. I want to extract the fields as shown below:

field                   value
whencreated             2019-05-16T08:31:32.0000000Z
pwdlastset              2019-05-16T08:31:32.0000000Z
badpwdcount             20
operatingsystemversion  6.1 (7601)
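Since the variable attribute is always the last key=value pair, a single rex anchored at the end of the event may be enough. A sketch (the \"+ allows for the doubled quotes seen in some of the samples):

```spl
... | rex "(?<field>\w+)=\"+(?<value>[^\"]+)\"+\s*$"
| table field, value
```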
We would like to keep a copy of the log files, which are downloaded via the API, before they get indexed, for long-term retention. inputs.conf:

[scwss-poll]
interval = 3600
sourcetype = symantec:websecurityservice:scwss-poll

[batch://$SPLUNK_HOME/var/spool/splunk/...stash_ta_scwss_logs.zip]
sourcetype = symantec:websecurityservice:scwss-poll
move_policy = sinkhole

[batch://$SPLUNK_HOME\var\spool\splunk\...stash_ta_scwss_logs.zip]
sourcetype = symantec:websecurityservice:scwss-poll
move_policy = sinkhole

Maybe something like this?

[scwss-poll]
interval = 3600
sourcetype = symantec:websecurityservice:scwss-poll

[monitor://$SPLUNK_HOME/var/spool/splunk/...stash_ta_scwss_logs.zip]
sourcetype = symantec:websecurityservice:scwss-poll

[monitor://$SPLUNK_HOME\var\spool\splunk\...stash_ta_scwss_logs.zip]
sourcetype = symantec:websecurityservice:scwss-poll
How do I debug a HEC input? Is there a way to see the incoming JSON?
Hello Team, I am using Splunk DB Connect version "Splunk DB Connect splunk_app_db_connect 3.3.1" and MSSQL 16; the driver used here is "MS-SQL Server Using MS Generic Driver Yes 4.2 MS-SQL Server Using MS Generic Driver With Kerberos Authentication Yes 4.2 MS-SQL Server Using MS Generic Driver With Windows Authentication Yes 4.2". Without upsert, i.e. if I do not check "Enable Upsert", I am getting data; but if I enable it, then no data comes in.
I have a multisite environment and I want to monitor all SSH user commands through .bash_history. For that purpose I enabled the monitor:// stanza on all Splunk components. Interestingly, I am seeing bash_history logs from some servers, while the majority of the servers are not showing me logs, even though the same configuration is in place across the board. Please advise.
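For comparison, a minimal monitor sketch (the paths assume home directories under /home plus root's own; index and sourcetype are placeholders). If the stanza really is identical everywhere, I'd check splunkd.log on one of the silent forwarders for permission errors, since .bash_history is typically readable only by its owner:

```conf
[monitor:///home/*/.bash_history]
disabled = 0
index = os
sourcetype = bash_history

[monitor:///root/.bash_history]
disabled = 0
index = os
sourcetype = bash_history
```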
Hi, the two scheduled searches "Generate pages - scheduled" and "Generate user sessions - scheduled" aren't scheduled in version 2.2.2 anymore. In the previous version, 2.1.0, they are still scheduled. Are these scheduled searches no longer needed? Best regards
I am a beginner with regex and Splunk. I am trying to use a regular expression, generated during field extraction, in an inline search, because I have different sourcetypes. While using the regex I am getting a Mismatched ']' error. rex "^[^[\n][(?P[^ ]+)[^"\n]"\w+(?P\s+/\w+)"
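Not a drop-in fix, since the pasted pattern looks incomplete (the group names after ?P appear to have been stripped in the post), but the Mismatched ']' error usually comes from unescaped bracket characters: inside rex a literal [ or ] must be written \[ or \], and every named group needs a name in angle brackets. A sketch of the general shape, with placeholder group names:

```spl
... | rex "^\[(?<first_field>[^ ]+)\]\s*\"(?<method>\w+)(?<uri>\s+/\w+)\""
```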
Hi, I am not able to log in to my Splunk environment; it shows 502 Bad Gateway, but on the backend server (Linux) I can see the Splunk status as running, and I don't know why the problem occurs. Previously, the Splunk server would run but would return a 502 Bad Gateway error after 2 days or so; I had to log in on the backend and restart it, and then it worked fine. But now nothing is working.
Hello there, Try as I might, I've been unable to determine why event breaking using the [oracle:alert:text] sourcetype is not working, and I was hoping for some help. We're running: 1. Splunk Enterprise v7.3.4 (we had the same issue when running v7.2, by the way, should anyone point out the published compatibility for this add-on as being an issue). 2. Splunk Add-on for Oracle Database v3.7, without modification, on our indexers and search heads. 3. Oracle 12c. When installing on the UFs, a monitoring stanza was created in inputs.conf like so:

[monitor:///u01/app/oracle*/diag/rdbms///trace/alert_*.log]
sourcetype = oracle:alert:text
index = ufo_db_audit
crcSalt = <SOURCE>

In the sample data below, the events should be broken at the timestamp lines (e.g. Wed May 13 23:35:09 2020). Sample data:

Wed May 13 23:35:09 2020
Thread 2 advanced to log sequence 13065 (LGWR switch)
Current log# 3 seq# 13065 mem# 0: +COD_DATA/CONTRLMD/ONLINELOG/group_3.635.945444391
Current log# 3 seq# 13065 mem# 1: +COD_FRAD/CONTRLMD/ONLINELOG/group_3.5494.945444391
Wed May 13 23:35:09 2020
Archived Log entry 27352 added for thread 2 sequence 13064 ID 0x56af26b4 dest 1:
Thu May 14 00:34:51 2020
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 12.1.0.2.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 12.1.0.2.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 12.1.0.2.0 - Production
Time: 14-MAY-2020 00:34:51
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 110
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=x.x.x.x)(PORT=44570))
Thu May 14 02:00:03 2020
Closing Resource Manager plan via scheduler window
Clearing Resource Manager plan via parameter

Three of the four events above are broken correctly.
The second-to-last one, however, ends up broken into numerous events like this:

========================================================================
Thu May 14 00:34:51 2020
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 12.1.0.2.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 12.1.0.2.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 12.1.0.2.0 - Production
========================================================================
Time: 14-MAY-2020 00:34:51
Tracing not turned on.
Tns error struct:
ns main err code: 12535
========================================================================
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 110
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=x.x.x.x)(PORT=44570))

Using the same props attributes, when I ingest the same log file using "Add Data" in Splunk Web, or even throw a monitor on a non-prod indexer to ingest the same log file, the events are broken perfectly. Although there seems to be no delay in the events being written to the monitored log files, I tried these attributes without success: multiline_event_extra_waittime = true time_before_close = 90 I can imagine BREAK_ONLY_BEFORE_DATE=true and SHOULD_LINEMERGE=true might be helpful, but I can't imagine I should have to radically alter the props attributes of a Splunk TA like this, so I presume something else is going on. I'd really appreciate any pointers here.
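For reference, a props.conf sketch of line breaking keyed on the weekday timestamp lines (the attribute values are illustrative, not necessarily the TA's actual settings):

```conf
[oracle:alert:text]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2} [A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2} \d{4})
TRUNCATE = 10000
```

One thing worth checking: parsing-time attributes like these only take effect on the first full Splunk instance in the data path, so if a heavy forwarder sits between the UFs and the indexers, the props need to be deployed there rather than on the indexers.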
Hi all, We set the sourcetype in inputs.conf on the universal forwarder, e.g.

[monitor:///Firewall/*/*_pa_firewall.log]
ignoreOlderThan = 1d
disabled = false
host_segment = 2
index = network
sourcetype = pan:log
no_appending_timestamp = true

The sourcetype of the related logs changed to pan:traffic. I found that this is caused by an add-on on the indexer that transforms the sourcetype for a matched pattern. So a configuration file on the indexer has higher priority than one on the universal forwarder; is that correct? Thanks a lot. /st wong
1) If Splunk can't read a date in certain events, what troubleshooting should I do? 2) I've onboarded application logs into Splunk and the agent is running, but when I query I don't get any results. What can be the causes, and how do I identify them?
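For question 1, the usual levers are the timestamp attributes in props.conf; a hedged sketch with placeholder values that would need to match your actual log format:

```conf
[my:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

For question 2, a common first step is to search index=_internal for the forwarder's host to confirm the agent is phoning home at all, and to widen the time range in case the events were indexed with a wrong timestamp.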