All Topics



I've trained a Density Function model on my data but ONLY want it to output outliers that exceed the upper bound, not those below the lower bound. How would I do this? My search:

index=my_index
| bin _time span=1d
| stats sum(numerical_feature) as daily_sum by department, _time
| apply my_model

Currently it is showing all outliers.
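One possible approach, as a hedged sketch: it assumes the model was fit on daily_sum, so apply adds the MLTK default flag field IsOutlier(daily_sum), and it uses a per-department median (computed with eventstats) to separate "above the distribution" from "below" - the median-based test is an assumption, not part of the original search:

index=my_index
| bin _time span=1d
| stats sum(numerical_feature) as daily_sum by department, _time
| apply my_model
| eventstats median(daily_sum) as dept_median by department
| where 'IsOutlier(daily_sum)'=1 AND daily_sum > dept_median

If the model was fit on a differently named field, adjust the IsOutlier(...) reference accordingly.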
Apps under the search head in /opt/splunk/etc/apps/ are not replicating to the search peers' /opt/splunk/var/run/searchpeers/ directory. Here is my setup: I have a standalone search head which has indexers as search peers. I have deployed apps to the search head and they are not replicating to the search peers.
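One setting worth checking on the search head, as a hedged sketch (shareBundles is a real distsearch.conf setting and defaults to true; the file location shown is the usual one):

# /opt/splunk/etc/system/local/distsearch.conf on the search head
[replicationSettings]
# when true (the default), the search head pushes its knowledge bundle
# to every search peer under $SPLUNK_HOME/var/run/searchpeers/
shareBundles = true

Note that the bundle carries the apps' search-time knowledge objects, not their full contents, so only a subset of each app is expected to appear on the peers.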
Hi, I am forwarding FortiGate firewall syslogs to a Windows universal forwarder, and this data is sent to a single Splunk search head. The FortiGate logs are appearing under their IPs; I want to distinguish them by their hostnames. I have created the file inputs.conf in C:\Program Files\SplunkForwarder\etc\system\local and put the following stanza into it:

[udp://514]
sourcetype = firewall_logs
connection_host = 192.168.1.*, 192.168.1.* (the FortiGate IPs)
host = both FortiGate hostnames as comma-separated values

but the events are all appearing under a single hostname.
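connection_host and host each take a single value per stanza, so comma-separated lists will not split events by device. One possible sketch, assuming reverse DNS can resolve each FortiGate's IP to its hostname (connection_host = dns is a real inputs.conf option):

[udp://514]
sourcetype = firewall_logs
# resolve the sending device's hostname via reverse DNS instead of
# stamping every event with one static host value
connection_host = dns

If DNS lookup is not available, a per-IP host override via a props/transforms host rewrite keyed on the device IP is another common route.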
Can I get a Splunk query that shows the last logon date for a group of Active Directory service accounts? Thanks
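A hedged sketch, assuming Windows Security logon events (EventCode 4624) are indexed and the service accounts can be matched by name (the index and account names below are placeholders):

index=wineventlog EventCode=4624 user IN ("svc_app1", "svc_app2", "svc_backup")
| stats max(_time) as last_logon by user
| eval last_logon=strftime(last_logon, "%Y-%m-%d %H:%M:%S")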
Hi all, we have a procedure that writes an event to the index only when the service responds OK (RESPONSE=1), so minutes where there is a KO produce no event. I have a sequence of events like these:

DATE,RESPONSE
2024/05/24 11:04:00,1
2024/05/24 11:05:00,1
2024/05/24 11:06:00,1
2024/05/24 11:08:00,1
2024/05/24 11:09:00,1
2024/05/24 11:10:00,1
2024/05/24 11:11:00,1
2024/05/24 11:13:00,1
2024/05/24 11:14:00,1

As you can see, between 2024/05/24 11:06:00 and 2024/05/24 11:08:00, and between 2024/05/24 11:11:00 and 2024/05/24 11:13:00, there are missing minutes (the KOs). What we want to do is produce a full output like this:

2024/05/24 11:04:00,1
2024/05/24 11:05:00,1
2024/05/24 11:06:00,1
2024/05/24 11:07:00,0
2024/05/24 11:08:00,1
2024/05/24 11:09:00,1
2024/05/24 11:10:00,1
2024/05/24 11:11:00,1
2024/05/24 11:12:00,0
2024/05/24 11:13:00,1
2024/05/24 11:14:00,1

in order to highlight the service's ups and downs. I've tried a lot of methods but I cannot obtain a similar result. Any suggestion? Thanks, Fabrizio
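A hedged sketch: timechart with a 1-minute span generates a row for every minute in the search window, and fillnull turns the empty minutes into 0 (the index name is a placeholder, the field name follows the post):

index=my_index
| timechart span=1m latest(RESPONSE) as RESPONSE
| fillnull value=0 RESPONSE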
I want to migrate my clustered environment from one Linux distribution to another. Is it possible to migrate the search head and deployment server first, and then the indexers on another day? The current distro is CentOS and the new one is RHEL. Any ideas or suggestions?
Hi all, I have a table where the values are showing as:

234.000000
56.000000

But we want to remove the trailing zeros and show only:

234
56

How do we do this?
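A hedged sketch, assuming the column is a numeric field named value: fieldformat changes only how the number is displayed, leaving the underlying value intact for later calculations, and round() with 0 decimal places drops the fractional part:

... | fieldformat value=round(value, 0)

Using eval instead of fieldformat would make the rounding permanent in the results.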
I am generating alerts from abnormal CPU-usage values on network devices. I would like to send these alerts via email or webhook, but I get the following error and they are not sent. What is the cause?

Error in 'sendalert' command: Alert script returned error code 2.
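A hedged first step: modular alert actions usually log the underlying failure to _internal under the sendmodalert component, so a search along these lines may surface the real error behind the exit code:

index=_internal sourcetype=splunkd component=sendmodalert log_level=ERROR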
We are receiving some notables that reference an encoded command being used with PowerShell, and the notable lists the command in question. The issue is that the command it is listing appears to be incomplete when we decode the string. Does anyone know a way for us to potentially hunt down and figure out what the full encoded command referenced in the notable may be?
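A hedged hunting sketch: with PowerShell Script Block Logging enabled, EventCode 4104 records the decoded script content, and long blocks are split across several 4104 events that can be reassembled via their block ID (the index and source names below are the common defaults for the Windows TA, not confirmed for your environment):

index=wineventlog source="WinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| table _time host ScriptBlockId ScriptBlockText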
Hi All, I have a Splunk query returning output as:

STime 09:45

I want to convert it to hours. Expected output:

STime 9.75 hrs

How do I achieve this using Splunk?
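A hedged sketch that splits the HH:MM string and converts the minutes into a fraction of an hour (9 + 45/60 = 9.75):

... | eval parts=split(STime, ":")
| eval STime=round(tonumber(mvindex(parts, 0)) + tonumber(mvindex(parts, 1)) / 60, 2)." hrs"
| fields - parts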
After configuring my indexer and forwarder to use SSL I receive the following error:

Error encountered for connection from src=MY_IP:44978. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

outputs.conf on the forwarder:

[tcpout:group1]
server = INDEXER_IP:9998
disabled = 0
sslVerifyServerCert = true
useClientSSLCompression = true

inputs.conf on the indexer:

[splunktcp-ssl:9998]
disabled = 0
connection_host = ip

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/my_prepared_cert.pem
requireClientCert = false

Output of openssl s_client -connect INDEXER_IP:9998:

SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 4E137F80E8629FC675460A5B2A5E13305F5DE4153720F7A2566A7ED2490EF77C
    Session-ID-ctx:
    Master-Key: 7AD057B736D12AD4CA0515CF7E7AE9BDB1BB45A05F75DA6042A1A5460110D886BB80BEE06A79CFE94428D33A51B76009
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - e4 37 a8 12 91 c0 0c a0-6e 1b c5 01 31 98 3f 80   .7......n...1.?.
    0010 - 95 9b 8d 47 c5 a3 99 33-49 2a f0 86 7f 80 e8 2c   ...G...3I*.....,
    0020 - b7 4e 80 23 ec 4e 0e c6-20 b5 70 9c f9 cd 7d bd   .N.#.N.. .p...}.
    0030 - 69 93 82 ec 9d 37 51 ba-47 8e a6 23 cb 51 7f 4e   i....7Q.G..#.Q.N
    0040 - 1f 59 8b 8b 06 c4 dc 23-f9 64 61 69 ea e3 c3 39   .Y.....#.dai...9
    0050 - 79 eb 82 a2 5c 0c 28 32-a1 2a a5 a8 50 41 95 54   y...\.(2.*..PA.T
    0060 - 5a f6 6d 53 cd 12 d3 34-fe 18 00 50 e0 06 2c 77   Z.mS...4...P..,w
    0070 - 0f b9 35 03 a5 08 a2 df-88 23 39 c8 8e b5 81 67   ..5......#9....g
    0080 - 71 c1 4e 7a ab 8f b8 36-59 1a 01 ae 7e a6 36 c0   q.Nz...6Y...~.6.
    0090 - 5e c2 6e 4f 1d 9f 47 76-cc 38 0e a5 26 91 50 de   ^.nO..Gv.8..&.P.
    Start Time: 1716539462
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
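The "unknown protocol" error typically means a client is sending plaintext to a port that expects TLS, which would fit an outputs.conf stanza that sets SSL options but never supplies the client-side certificate settings. A hedged sketch of what the forwarder side might need (clientCert and sslRootCAPath are real outputs.conf settings; the file paths are assumptions):

[tcpout:group1]
server = INDEXER_IP:9998
disabled = 0
# hypothetical paths - point these at your own forwarder-side cert and CA bundle
clientCert = /opt/splunkforwarder/etc/auth/mycerts/forwarder_cert.pem
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true
useClientSSLCompression = true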
Hi All, I am trying to rename some data but it is giving me an error. I am doing it this way:

| rename "Data Time series* *errorcount=0" AS "Success"

but the error is:

Error in 'rename' command: Wildcard mismatch: 'Data Time series* *errorcount=0' as 'Success'.

Log file:

Data Time series :: DataTimeSeries{requestId='482-fd1e-47-49-bf9b99f8', errorcount=0,

Can you please help me with the correct rename command?
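rename operates on field names, not on event text, so it cannot match a string inside the log line. If the goal is to label events whose raw text contains errorcount=0, an eval sketch along these lines may be closer to what is wanted (the field name status is an assumption):

... | eval status=if(match(_raw, "errorcount=0"), "Success", "Failure")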
Hi, I have a table with x, y1, y2 and plot them in a line chart. How can I find the value of x where the two lines cross?
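A hedged sketch: compute the difference between the two series and use streamstats to spot the rows where its sign flips, which brackets each crossing (this assumes the rows are already sorted by x):

... | eval diff=y1-y2
| streamstats current=f last(diff) as prev_diff
| where (diff >= 0 AND prev_diff < 0) OR (diff <= 0 AND prev_diff > 0)

The exact crossing value can then be approximated by linear interpolation between each flagged row and the one before it.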
{"body":"2024-04-29T20:25:08.175779 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XX Logon Failed: Anonymous\n2024-04-29T20:25:10.190339 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah... See more...
{"body":"2024-04-29T20:25:08.175779 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XX Logon Failed: Anonymous\n2024-04-29T20:25:10.190339 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-29T20:25:10.241220 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-29T20:25:10.342343 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":1599,"x-opt-offset":"3642132344","x-opt-enqueued-time":1714422318556} {"body":"2024-04-24T12:46:29.292880 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.634829 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T12:46:34.651499 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.653643 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T12:46:34.662636 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.712475 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.723543 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:36.403615 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":156626,"x-opt-offset":"3560527888816","x-opt-enqueued-time":1713962799368} {"body":"2024-04-24T01:04:30.375693 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T01:04:35.034067 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":156,"x-opt-offset":"355193796","x-opt-enqueued-time":171392067}     I have pasted my raw log samples in the above space. Can someone please help me to break these into multiple evnts using props.conf I wish to break the lines before each timestamp (highlighted).   Thanks, Ranjitha
Hi All, I am using the transaction command to group events and get the stop time of a device:

| transaction sys_id startswith="START" endswith="STOP"
| eval stop_time=strftime(mvindex(sys_time,1), "%Y-%m-%d %H:%M:%S.%2N")
| table sys_id stop_time

However, when a field has the same value for the startswith and endswith events (for example, sys_time is the same for both), mvindex(sys_time,1) is empty, whereas mvindex(sys_time,0) gives the value. If the values are different, it works fine. Does anyone have any idea about this behavior, and how to work around it to get the value regardless?
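This matches transaction's habit of collapsing duplicate values of a field into a single multivalue entry, so when start and stop share the same sys_time there is only index 0. A hedged workaround sketch using coalesce as a fallback:

| transaction sys_id startswith="START" endswith="STOP"
| eval stop_time=strftime(coalesce(mvindex(sys_time,1), mvindex(sys_time,0)), "%Y-%m-%d %H:%M:%S.%2N")
| table sys_id stop_time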
Hello, I've been trying to set up a script to run every 5 minutes with a cron job in a CentOS environment. Here are the script, its cron configuration, and the inputs.conf [screenshots from the original post omitted]. This configuration is on my machine 3, configured as a UF. I have already created the system_metrics index in my indexer GUI. When I search for "index=system_metrics sourcetype=linux_performance" in the GUI of machine 2, configured as a SH, there's no data. Can someone help me or give me some instructions please? Thanks!
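A hedged alternative worth comparing against: instead of external cron, a scripted input on the UF can run the script on a 5-minute schedule itself, since interval accepts cron syntax (the script path below is a placeholder):

[script:///opt/splunkforwarder/etc/apps/my_app/bin/system_metrics.sh]
# cron-style schedule: every 5 minutes
interval = */5 * * * *
sourcetype = linux_performance
index = system_metrics
disabled = 0

If the data still does not appear, the usual next steps are confirming the UF's outputs.conf points at the indexer and that the script writes its metrics to stdout.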
I configured the otelcol-contrib agent config.yaml file to send data to Splunk Cloud. I'm getting the data, but the source is coming through as the HEC token name. The receiver is configured to read different files:

filelog/sys:
  include: [ /var/log/messages, /var/log/auth, /var/log/mesg$$, /var/log/cron$$, /var/log/acpid$$ ]
  start_at: beginning
  include_file_path: true
  include_file_name: false

Exporters: I didn't set the source in the exporters.
Option 1: by default, Splunk takes the HEC token name as the source value.
Option 2: I can give the value as a log file path, but with multiple files that doesn't work.

In Splunk Cloud, source is the HEC token value, while the log_file_path field does give the log file path. Is there a way I can configure the source to take the log file path?
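One hedged sketch: the splunk_hec exporter maps the com.splunk.source attribute to the HEC source, and with include_file_path: true the filelog receiver stores the path in log.file.path, so a transform-processor copy between the two may do it - whether this mapping behaves this way in your collector version should be verified:

processors:
  transform/source_from_path:
    log_statements:
      - context: log
        statements:
          # copy the file path captured by the filelog receiver into the
          # attribute the Splunk HEC exporter uses as the event's source
          - set(attributes["com.splunk.source"], attributes["log.file.path"])

Remember to add transform/source_from_path to the processors list of the logs pipeline.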
One of my customers is using a tool with a REST API available via the SAP ALM Analytics API. Ref. https://api.sap.com/api/CALM_ANALYTICS/overview They are looking to get data from the API into a Splunk index, so we suggested having an intermediary application (like a scheduled function) get the data from SAP and send it to Splunk using an HEC token. Is it possible to use something in Splunk directly to pull the data from the third party? Or is the suggested approach a good one?
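If the intermediary route is taken, the HEC side is a plain HTTPS POST; a hedged sketch (the host, token, sourcetype, and index are placeholders):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"metric": "from_sap_alm"}, "sourcetype": "sap:alm:analytics", "index": "sap"}'

For pulling directly from Splunk, a REST-polling modular input (for example one built with Splunk Add-on Builder) is the usual alternative, at the cost of handling SAP's authentication inside Splunk.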
I am calling the trace endpoint https://ingest.<realm>.signalfx.com/v2/trace/signalfxv1 and sending this span in the body:

[
  {
    "id": "003cfb6642471ba4",
    "traceId": "0025ecb5dc31498b931bce60be0784cd",
    "name": "reloadoptionsmanager",
    "timestamp": 1716477080494000,
    "kind": "SERVER",
    "remoteEndpoint": { "serviceName": "XXXXXXX" },
    "Tags": { "data": "LogData", "eventId": "0", "severity": "8" }
  }
]

The request receives a 200 response. The response body is OK. However, the span does not appear in APM. The timestamp is the number of microseconds since 1/1/1970.
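A few things stand out against the Zipkin-style JSON this kind of endpoint accepts: field names are case-sensitive (tags, not Tags), the service a span belongs to is named by localEndpoint rather than remoteEndpoint, and spans normally carry a duration in microseconds. A hedged, corrected sketch (the duration value is an assumption added for illustration):

[
  {
    "id": "003cfb6642471ba4",
    "traceId": "0025ecb5dc31498b931bce60be0784cd",
    "name": "reloadoptionsmanager",
    "timestamp": 1716477080494000,
    "duration": 1000,
    "kind": "SERVER",
    "localEndpoint": { "serviceName": "XXXXXXX" },
    "tags": { "data": "LogData", "eventId": "0", "severity": "8" }
  }
]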
We have a contractor installing a Splunk instance for us. For the search heads, we have an NVMe volume mounted for the /opt/splunk/var/run folder. The ./run folder is owned by 'root' and the 'splunk' user cannot write into the folder. Similarly, our indexers have a mounted NVMe volume for the /opt/splunk/var/lib folder, and it too is owned by 'root'. Index folders and files are located one level below that, in the ./lib/splunk folder, where the 'splunk' user is the owner. What are the consequences of having 'root' own these folders on the operation of Splunk? I assumed that when Splunk runs as non-root, it must own all folders and files from /opt/splunk on down. Am I wrong?
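If ownership does turn out to be the problem (Splunk writes search artifacts under var/run and bucket data under var/lib/splunk, so a non-writable parent can surface as failures there), a hedged fix sketch run as root on the affected hosts, using the paths from the post:

# on the search heads
chown -R splunk:splunk /opt/splunk/var/run
# on the indexers
chown -R splunk:splunk /opt/splunk/var/lib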