All Topics



Hello everyone, I am running into an issue that may be with either Splunk or my Kiwi Syslog server; I am not sure which, and the research I have done so far is not helping. We had a network device that was not communicating and sending logs to the syslog server. We fixed that, and now when we view the raw logs on the server we can see the specific %Port_Security logs that we want reported directly to Splunk. However, when I run a search query that worked before a baseline change, I get 0 results. So I changed the way I retrieve these logs and ran sourcetype=syslog host={switch-name}. The switch shows up and returns a number of logs, but the most important log we want (%Port_Security) is not among the results. Thinking there might be a problem with the sourcetype, I then ran a search that targets the live syslog data directly: source={log location} host={switch-name}. The system shows up again, but I did not find the port security events in this search either, even after appending %Port_Security to the end of it. I reached out to the engineers who provided the tool to us, since they do the back-end configuration and troubleshooting, but they refuse to help.
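A sketch of a way to test whether the events are reaching Splunk at all, independent of sourcetype or host assignment, by searching for the raw string across indexes (the index=* wildcard is purely for diagnosis and assumes your role can see the relevant indexes):

  index=* "%Port_Security"
  | stats count by index, sourcetype, host

If this returns nothing over a window where the raw syslog file clearly contains the events, the gap is in ingestion (the monitored file or syslog input) rather than in the search itself.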
Hi Splunkers, I'm new to the Splunk world. For a reporting task, I'm trying to obtain a count of every client or server (every asset with a Splunk daemon) by Splunk release version and by OS type. I'm not familiar with the stats command. I tried something like this:

  index="_internal" sourcetype="splunkd" group=tcpin_connections (os=windows OR os=linux) (version=7* OR version=8*)
  | table version, os, hostname
  | dedup hostname
  | stats count as hostname by version, os

But the results seem to be incorrect, and I can't figure out what I am doing wrong. I would like to obtain something like this:

  Splunk version | os | Hostname_count_result
  8.x.x | linux | sum of hostnames
  8.x.x | windows | sum of hostnames
  7.x.x | linux | sum of hostnames
  7.x.x | windows | sum of hostnames

Many thanks for your replies! Regards
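A minimal corrected sketch, assuming the same tcpin_connections events: dc() counts each distinct hostname once per version/os group, which makes the table and dedup steps unnecessary, and the count gets its own name instead of overwriting the hostname field:

  index=_internal sourcetype=splunkd group=tcpin_connections (os=windows OR os=linux) (version=7* OR version=8*)
  | stats dc(hostname) as Hostname_count_result by version, os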
Hey, good day! I am trying to build a use case for detecting MFA login attempts from unauthorized IPs, and I am looking for support here. Any leads would be much appreciated. BR, PS
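One common pattern, sketched here with hypothetical names: keep the authorized addresses in a lookup (authorized_ips.csv with a src column) and alert on authentication events whose source address is not in it. The index, sourcetype, and field names below are assumptions to adapt to your MFA data source:

  index=auth sourcetype=mfa src=*
  | search NOT [ | inputlookup authorized_ips.csv | fields src ]
  | stats count earliest(_time) as first_seen latest(_time) as last_seen by user, src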
Hi all, I am setting up a dashboard and an alert that should fire when a host has been missing from Splunk for more than 24 hours. I am using the query below, but I get a malformed-search error when running it in search, although it returns results on the dashboard.

  | inputlookup data.csv where DECOMMISSIONED=N SUB_ENVIRONMENT!=TEST
  | fields ACTIVE_DC APP_NAME DATABASE HOST_NAME APP_NAME DB_VERSION DB_ROLE SUB_ENVIRONMENT
  | eval Reference=ABC
  | rename HOST_NAME as host
  | join type=left host
      [ search index=dbecx source="*audit*"
        | stats count as SPLEvents latest(_time) as LastSeen by host
        | eval age=round((now()-LastSeen)/3600,1)
        | eval Status=case(
            LastSeen>(now()-(3600*2)),"Low",
            LastSeen<(now()-(3600*2+1)) AND LastSeen>(now()-(3600*8)),"Medium",
            LastSeen<(now()-(3600*8+1)) AND LastSeen>(now()-(3600*24)),"High",
            1=1,"Critical")
        | convert ctime(LastSeen) timeformat="%d-%m-%Y %H:%M:%S"
        | eval Reference="SPL" ]
  | fields DB_VERSION DATABASE APP_NAME ACTIVE_DC host Status SPLEvents
  | rex mode=sed field=host "s/\..*$//g"
  | fillnull value=Missing Status
  | fillnull value=Null

Can someone help here?
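Two things worth checking, sketched below on the assumption that the logic itself is what you intend. First, unquoted values in eval are read as field names, so Reference=ABC assigns the (probably nonexistent) field ABC rather than the string "ABC"; the same applies to the fillnull values. Second, searches pasted between dashboard XML and the search bar often pick up smart quotes or unescaped characters, which is a frequent source of "malformed" errors that appear in one context but not the other.

  | eval Reference="ABC"
  ...
  | fillnull value="Missing" Status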
Hey, we have successfully integrated PagerDuty with Splunk, meaning that whenever a Splunk alert is triggered, a PagerDuty alert is created and shown in our PagerDuty service. Now we are looking for a way to customize the urgency. By default, all alerts created by the Splunk integration have "High" urgency in PagerDuty, and we want to control that via the custom details (screenshot omitted). We tried a few things, such as adding "urgency" to the JSON, but without any success, and the documentation does not mention urgency anywhere. Does anybody know how to do this? Thanks
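A direction to investigate rather than a confirmed fix, sketched under the assumption that the integration sends events through the PagerDuty Events API v2: that API has no urgency field at all. It accepts a severity field (critical, error, warning, or info) in the payload, and urgency is then derived on the PagerDuty side from the service's incident urgency settings (including the option to base urgency on alert severity):

  {
    "routing_key": "YOUR_INTEGRATION_KEY",
    "event_action": "trigger",
    "payload": {
      "summary": "Splunk alert fired",
      "source": "splunk",
      "severity": "warning"
    }
  }

Whether the Splunk add-on lets you override severity per alert depends on the add-on version, so the service-side urgency rules may be the more reliable lever.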
Hi All, is it possible to generate reports in Excel format from the Custom Dashboards and Reports tabs? I am aware that we can export a report in CSV format from the Metric Browser, but since that export cannot be scheduled, and Health Rule violation details are not available in the Metric Browser, I need to rely on Custom Dashboards.
I have developed a Splunk add-on that is compatible with Splunk Enterprise but not with Splunk Cloud, due to one piece of functionality. I would like to keep a single build, but add a condition based on whether it is running on Enterprise or Cloud so that it can support both. Is that possible?
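It is possible to branch at runtime: the server info endpoint reports instance_type as "cloud" on Splunk Cloud. A minimal sketch using the splunklib SDK, assuming the code runs inside the add-on where splunkd hands it a session key (names here are placeholders):

  # sketch: detect Splunk Cloud from inside an add-on
  import splunklib.client as client

  def is_cloud(session_key):
      service = client.connect(token=session_key, host="localhost", port=8089)
      # /services/server/info exposes instance_type="cloud" on Splunk Cloud
      return service.info.get("instance_type", "") == "cloud"

Keep in mind that a single package intended for Splunk Cloud still has to pass AppInspect's cloud checks as a whole, even if the Enterprise-only functionality is disabled at runtime.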
Can you please suggest the following? We are looking to delete or update particular indexed data in Splunk programmatically, from Python add-on code, during data import into Splunk.
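For context: indexed events in Splunk are immutable, so they cannot be updated in place, and the delete command only masks events from search results without reclaiming disk space. A sketch of the masking approach, which a script could submit as an ordinary search (the index, sourcetype, and filter below are hypothetical, and the user running it needs the can_delete role):

  index=my_index sourcetype=my_sourcetype bad_field="bad_value"
  | delete

For anything more surgical, the usual pattern is to filter or rewrite events before indexing (props.conf/transforms.conf on a heavy forwarder) rather than to alter them afterwards.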
03-30-2023 01:56:34.810 -0400 INFO  AutoLoadBalancedConnectionStrategy [15424 TcpOutEloop] - Removing quarantine from idx=10.65.152.88:9997 connid=0
03-30-2023 01:56:34.811 -0400 WARN  TcpOutputFd [15424 TcpOutEloop] - Connect to 10.65.152.88:9997 failed. Connection refused
03-30-2023 01:56:34.811 -0400 ERROR TcpOutputFd [15424 TcpOutEloop] - Connection to host=10.65.152.88:9997 failed
03-30-2023 01:56:34.811 -0400 WARN  TcpOutputFd [15424 TcpOutEloop] - Connect to 10.65.152.88:9997 failed. Connection refused

What is the configuration issue here, at the forwarder or at the Enterprise server level?
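"Connection refused" normally means nothing is listening on port 9997 at 10.65.152.88, so the first place to look is the receiving side rather than the forwarder. A sketch of the two configurations to verify, assuming a standard splunktcp setup. On the indexer (inputs.conf):

  [splunktcp://9997]
  disabled = 0

On the forwarder (outputs.conf):

  [tcpout:default-autolb-group]
  server = 10.65.152.88:9997

If both look right, confirm that splunkd is actually running on the indexer and that no firewall is blocking port 9997.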
Hi Team, below is the raw text that has been received in our Splunk portal. It contains a field, name, that holds the name of the job.

{"timestamp": "2023-03-29T04:57:07.366881Z", "level": "INFO", "filename": "splunk_sample_csv.py", "funcName": "main", "lineno": 38, "message": "Dataframe row : {\"_c0\":{\"0\":\"{\",\"1\":\"    \\\"total\\\": 236\",\"2\":\"    \\\"statuses\\\": [\",\"3\":\"        {\",\"4\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"5\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"6\":\"            \\\"count\\\": 0\",\"7\":\"            \\\"name\\\": \\\"BHW_T8841_ANTRAG_RDV\\\"\",\"8\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvp\\\"\",\"9\":\"        }\",\"10\":\"        {\",\"11\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"12\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"13\":\"            \\\"count\\\": 0\",\"14\":\"            \\\"name\\\": \\\"BHW_T8009_DATEN_EBIS_RDV\\\"\",\"15\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvi\\\"\",\"16\":\"        }\",\"17\":\"        {\",\"18\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"19\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"20\":\"            \\\"count\\\": 0\",\"21\":\"            \\\"name\\\": \\\"BHW_T5895_AZV_DATEN_RDV\\\"\",\"22\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvd\\\"\",\"23\":\"        }\",\"24\":\"        {\",\"25\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"26\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"27\":\"            \\\"count\\\": 0\",\"28\":\"            \\\"name\\\": \\\"BHW_T5829_SONDERTILGUNG_RDV\\\"\",\"29\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv9\\\"\",\"30\":\"        }\",\"31\":\"        {\",\"32\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"33\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"34\":\"            \\\"count\\\": 0\",\"35\":\"            \\\"name\\\": \\\"BHW_T5152_PROLO_ZINSEN_RDV\\\"\",\"36\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv6\\\"\",\"37\":\"        }\",\"38\":\"        {\",\"39\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"40\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"41\":\"            \\\"count\\\": 0\",\"42\":\"            \\\"name\\\": \\\"BHW_T5149_PROLO_KOND_RDV\\\"\",\"43\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv1\\\"\",\"44\":\"        }\",\"45\":\"        {\",\"46\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"47\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"48\":\"            \\\"count\\\": 0\",\"49\":\"            \\\"name\\\": \\\"BHW_T5144_ZUT_SALDEN_RDV\\\"\",\"50\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqux\\\"\",\"51\":\"        }\",\"52\":\"        {\",\"53\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"54\":\"

We need to separate out the text that comes after \\\"name\\\":\\\"**********\\\" (the ***** values). Please help.
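A sketch of one way to pull every job name out of this escaped JSON with a loose regular expression, assuming the job names consist only of word characters as in the sample (the \W+ soaks up the escaped quote/colon characters between the key and the value):

  index=my_index sourcetype=my_sourcetype
  | rex max_match=0 "\"name\W+(?<job_name>\w+)"
  | mvexpand job_name
  | table _time, job_name

The leading escaped quote in the pattern keeps it from also matching "filename" and "funcName" in the outer JSON envelope.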
Hi, we have a requirement to ingest and monitor Appian application logs from the cloud into Splunk. Has anyone worked on this and can suggest how to proceed? Please advise. Thanks.
I have a bunch of saved searches that produce CSVs in a specific format (which will then be imported into another tool), and I need to automate the end-to-end process.

1. The Splunk searches run on a weekly schedule (looking back over the last 7 days) and produce approx. 6 different CSVs.
2. I need to copy or redirect these CSVs to a Windows location.
3. A scheduled task will run to bundle the CSVs (plus some additional files) into an archive file.
4. An import task will run on a schedule, using the zip to populate the second tool.

I need help with step 2. Can anybody tell me what options I have? I'm open to a script, an additional switch I can add to the search, or pretty much anything that still allows this process to be automated.

Thanks!
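One option for step 2, sketched here in Python on the assumption that the Windows machine can reach the search head's management port (8089) and that re-running each saved search weekly is acceptable: the REST export endpoint streams results as CSV, which the script can write straight into the folder the bundling task reads. Host, credentials, paths, and search names below are all placeholders.

  # sketch: pull saved-search results as CSV over the Splunk REST API
  import requests

  BASE = "https://splunk-sh.example.com:8089"          # placeholder search head
  AUTH = ("svc_export", "changeme")                    # placeholder credentials

  for name in ["weekly_export_1", "weekly_export_2"]:  # placeholder search names
      resp = requests.post(
          f"{BASE}/services/search/jobs/export",
          auth=AUTH,
          data={"search": f'| savedsearch "{name}"', "output_mode": "csv"},
          stream=True,
          verify=False,  # or point at your CA bundle
      )
      resp.raise_for_status()
      with open(rf"C:\exports\{name}.csv", "wb") as out:
          for chunk in resp.iter_content(chunk_size=65536):
              out.write(chunk)

Alternatives in the same spirit: have the searches write lookups with outputlookup and fetch the CSVs from the search head, or run a copy script on the search head itself and push to a UNC share.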
Hi, I have the JSON data below in my Splunk logs, in different places (different rows). All of it belongs to the unique id 123456JKL.

{"Id": "123456JKL", "Table1": "employee", "department": "admin"}
{"Id": "123456JKL", "Table2": "salary", "joineddate": "value"}
{"Id": "123456JKL", "pay": "{type:"test","name":"jas"}", "joineddate": "value"}

I want to show all the JSON data for the same Id in a single row on a Splunk dashboard, grouped by the common value "Id": "123456JKL". Please help here.
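A sketch using stats, assuming the Id field is extracted (via KV_MODE=json or spath): stats values() tolerates fields that exist on only some of the events, which makes it a better fit here than join or transaction.

  index=my_index sourcetype=my_sourcetype
  | stats values(_raw) as all_json values(Table1) as Table1 values(Table2) as Table2 values(department) as department values(joineddate) as joineddate by Id

The values(_raw) column shows every original JSON event side by side in the single row per Id; drop it if only the extracted fields are needed.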
I received emails from the following email account (teamsplunk@splunk.com); the emails were flagged as spam and are being held by spam protection. I just wanted to know whether the source is legitimate, so that I can allow or deny it as necessary. P.S. My apologies for the miscategorization; I wasn't able to find an applicable category. Many thanks in advance! Zaheer
We are getting this warning for all our forwarders. Is there a problem?

03-30-2023 05:00:23.265 +0530 INFO AutoLoadBalancedConnectionStrategy [7124 TcpOutEloop] - Connected to idx=10.22.91.231:9997, pset=1, reuse=0. using ACK.
03-30-2023 05:00:25.234 +0530 INFO AutoLoadBalancedConnectionStrategy [7124 TcpOutEloop] - After randomization, current is first in the list. Swapping with last item
03-30-2023 05:00:25.531 +0530 WARN AutoLoadBalancedConnectionStrategy [6916 TcpOutEloop] - Cooked connection to ip=10.22.91.231:9997 timed out
03-30-2023 05:00:25.531 +0530 INFO AutoLoadBalancedConnectionStrategy [7148 TcpOutEloop] - Closing stream for idx=3.81.182.58:9997
03-30-2023 05:00:25.531 +0530 INFO AutoLoadBalancedConnectionStrategy [7148 TcpOutEloop] - Connected to idx=10.22.91.231:9997, pset=5, reuse=0. using ACK.
03-30-2023 05:00:26.109 +0530 WARN AutoLoadBalancedConnectionStrategy [6236 TcpOutEloop] - Cooked connection to ip=10.22.91.231:9997 timed out
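"Cooked connection ... timed out" on many forwarders at once usually points at back-pressure on the indexer rather than a forwarder-side fault. A sketch of a search against the indexer's own metrics to check whether its queues are blocking (run it over the window when the warnings occur; the host placeholder is the indexer at 10.22.91.231):

  index=_internal host=<your_indexer> source=*metrics.log* group=queue
  | stats sum(eval(if(blocked=="true",1,0))) as blocked_count by name

Persistently non-zero blocked_count values on the parsing or indexing queues mean the indexer cannot keep up, which stalls forwarder connections exactly like this.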
Hello community, I have an issue with one forwarder: it was working and then suddenly stopped sending data to the indexers. The Splunk service on the UF is running, but no data reaches the indexers, and the internal logs are not sent either. If I restart the UF, logs are sent, but almost immediately they stop again. I have checked the logs but cannot find a logical reason for this to happen. I have changed [inputproc] max_fd = <integer> from 100 to 8192 and then restarted the Splunk service. I have also checked the ulimits; the current value is 6400. If I stop and start the service and check the logs, I cannot see any clue about what may be happening. These are the log entries that appear before the UF stops sending data:

03-30-2023 05:07:26.260 +0200 INFO TailReader [20263 MainTailingThread] - Setting maxFDs to 8192
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize https_proxy from server.conf for splunkd. Please make sure that the https_proxy property is set as https_proxy=http://host:port in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize the proxy_rules setting from server.conf for splunkd. Please provide a valid set of proxy_rules in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.013 +0200 INFO ProxyConfig [20259 TcpOutEloop] - Failed to initialize the no_proxy setting from server.conf for splunkd. Please provide a valid set of no_proxy rules in case HTTP proxying needs to be enabled.
03-30-2023 05:07:31.674 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.675 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.685 +0200 WARN TcpOutputProc [20259 TcpOutEloop] - 'sslCertPath' deprecated; use 'clientCert' instead
03-30-2023 05:07:31.695 +0200 INFO AutoLoadBalancedConnectionStrategy [20259 TcpOutEloop] - Will resolve indexer names at 330.000 second interval.
03-30-2023 05:07:36.671 +0200 INFO TailReader [20266 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
03-30-2023 05:07:37.774 +0200 INFO DC:DeploymentClient [20219 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
03-30-2023 05:07:49.774 +0200 INFO DC:DeploymentClient [20219 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

After a certain time I see the message "The TCP output processor has paused the data flow. Forwarding to host_dest..." but I assume this is just because splunkd is not able to send data.

Do you have any idea what might be going on?

Thanks in advance
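"The TCP output processor has paused the data flow" is back-pressure: either the indexers are not accepting data fast enough, or the UF is hitting its own output limits. Besides checking the indexer-side queues (see the metrics.log sketch earlier in this listing), one forwarder-side setting worth reviewing is the UF's default bandwidth cap in limits.conf, which can produce exactly this stop-start pattern when the forwarder has a backlog. A sketch of raising it, only appropriate if the indexers have headroom:

  [thruput]
  # default on a Universal Forwarder is 256; 0 removes the limit entirely
  maxKBps = 1024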
Thanks in advance. How can I read and extract table-format logs in Splunk? I need DeviceID as a field with its values, and the same for all of the other columns.

3/29/23 4:56:34.000 AM
29-Mar-2023 04:56:34:PM: |Application Disk Space utilization %
DeviceID  VolumeName  FreeSpace (Gb)  Total (Gb)  FreePercent
--------  ----------  --------------  ----------  -----------
C:        System      389.45          475.14      81.97
P:        Offline     389.45          475.14      81.97

3/29/23 4:56:34.000 AM
29-Mar-2023 04:56:34:PM: |Services Status in Server
Status   Name     DisplayName
------   ----     -----------
Stopped  ALG      Application Layer Gateway Service
Running  Appinfo  Application Information
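Splunk's multikv command is designed for exactly this shape of event: a header row followed by aligned columns, optionally with a dashed underline. A sketch, assuming each table arrives as a single multi-line event with its header intact:

  index=my_index sourcetype=my_sourcetype "Disk Space utilization"
  | multikv
  | table DeviceID, VolumeName, FreePercent

multikv emits one result per table row, using the header line for field names. Header names with special characters (such as "FreeSpace (Gb)") get sanitized, so check what the extracted fields are actually called with | fieldsummary first.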
Dear All, can AppDynamics monitor Oracle PeopleSoft? I know this application is Java-based, but it is a packaged Oracle application, which may make it hard for us to monitor the whole application. Any suggestions, recommendations, or guides for monitoring Oracle PeopleSoft? Regards, Ruli
Hi. Let's say there is a field named "raw". The values look like this: http-header1=value1|http-header2=value2|... The number of HTTP headers is 1 to 4. For example:

METHOD=POST|User-Agent=Mozilla|HTTP-CONTENT=img/jpeg

I'd like to split this field into multiple fields, like this:

field            | value
-----------------+--------
raw_http_header1 | value1
raw_http_header2 | value2
...

For the example above:

field            | value
-----------------+----------
raw_METHOD       | POST
raw_User_Agent   | Mozilla
raw_HTTP_CONTENT | img/jpeg

Note that field names cannot contain "-".
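A sketch of one way to do this generically, whatever the header names turn out to be: split raw on the pipes, fan the pairs out with mvexpand, build each field name dynamically with eval's {field} syntax (replacing "-" with "_" on the way), and then collapse back to one row per original event. streamstats provides the per-event id needed for the regrouping.

  ... your base search ...
  | streamstats count as event_id
  | rex field=raw max_match=0 "(?<pair>[^|]+)"
  | mvexpand pair
  | rex field=pair "^(?<hname>[^=]+)=(?<hval>.*)$"
  | eval hname="raw_".replace(hname, "-", "_")
  | eval {hname}=hval
  | stats values(raw_*) as raw_* by event_id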
How can I control or force the hostname to be a specific value via inputs.conf?

inputs.conf stanzas:

[monitor:///var/log/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

[monitor:///var/adm/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

[monitor:///etc/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

I have tried multiple solutions.

Case 1: added host_regex = <regular expression>; this did not work.

Case 2: added host = hostname1,hostname2,hostname3,hostname4; this worked for some log file paths (e.g. for /var/log/messages I get host = hostname1), but for other paths, such as /var/log/dnf.log, I get host = hostname1,hostname2,hostname3,hostname4.

Case 3: I tried the feature where inputs.conf uses a [default] stanza, as below. But I cannot implement this as a solution, because the app is pushed from the DS to all UFs, so hardcoding a hostname in the default stanza is not possible in my case.

[default]
host = <hostname1>

[monitor:///var/log/*]
disabled = 0
index = test_data

[monitor:///var/adm/*]
disabled = 0
index = test_data

[monitor:///etc/*]
disabled = 0
index = test_data
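For context: host in an input stanza takes a single static string, not a list, which is why the comma-separated value shows up verbatim as the host. If the goal is simply for each UF to report its own hostname, omit host from the deployed app entirely; each forwarder then falls back to the host set in its local $SPLUNK_HOME/etc/system/local/inputs.conf (normally its own hostname). If the host must instead be derived from the monitored file's path, host_regex (the first capture group becomes the host) or host_segment apply per stanza. A sketch, assuming a hypothetical directory layout of /var/log/<hostname>/...:

  [monitor:///var/log/*]
  disabled = 0
  index = test_data
  # first capture group of the file path becomes the host
  host_regex = /var/log/([^/]+)/

host_segment = 3 would achieve the same when the hostname is exactly the third path component.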