All Topics

We upgraded to Splunk Enterprise v9.0.2 yesterday and have since hit an issue with an integration with our ServiceNow platform for raising Cases and Incidents. We have a custom "Alert Action" which uses the Python "requests" module. (Wondering if the removal of Python 2 has had an impact on our previously working script and its "import requests".) In any case, we seem to be getting SSL: UNKNOWN_PROTOCOL. Just asking if anyone has had something similar happen; I'm pulling my hair out trying to make sense of the SSL configuration settings. We issue a log statement before and after the requests call, and it never reaches the "after" statement, as if the error were silent. Any pointers would be appreciated.
I'm trying to use the streamstats command with time_window to track when certain user actions happen more than twice within an hour. My search looks like this ("dedup _time" because we get duplicate rows):

<search>
| sort _time
| fields _time user
| dedup _time
| streamstats time_window=60min count as amount by user
| where amount > 2
| table _time user amount

This search otherwise works correctly, but the problem is that it triggers multiple times when the user action happens more than twice in an hour, so I get results like:

2022-01-12 16:04:56.482 username1 3
2022-01-12 16:07:58.525 username1 4
2022-01-12 16:13:16.137 username1 5
2022-01-12 16:14:30.255 username1 6

How can I get only the largest result (in this case 6) for a sequence like this? I can't use "dedup user" because the alert may also trigger for the same user at some other time, and that should be reported as its own case. Any help is greatly appreciated.
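One possible approach, as a rough sketch only: treat hits that are more than an hour apart as separate bursts, number the bursts per user, and keep only the peak count per burst. The 3600-second gap and the prev_time/new_burst/burst_id fields are assumptions introduced for illustration:

<search>
| sort 0 _time
| fields _time user
| dedup _time
| streamstats time_window=60min count as amount by user
| where amount > 2
| streamstats current=f last(_time) as prev_time by user
| eval new_burst=if(isnull(prev_time) OR _time - prev_time > 3600, 1, 0)
| streamstats sum(new_burst) as burst_id by user
| stats max(amount) as amount, latest(_time) as _time by user burst_id
| fields _time user amount

The final stats collapses each burst to its largest count, so a user who triggers again hours later starts a new burst and is reported separately.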
Hi Team, we have Splunk Cloud Victoria with two SHs (a Core SH and an ES SH). We have installed the MS Cloud Service Add-on on the Core SH and it is automatically reflected on the ES SH, but the input we configured in this add-on on the Core SH is not reflected on the ES SH. 1. Should the input configuration (MSCS Add-on) also be reflected on both SHs if we configure it on only one SH? 2. If not, and we configure the input (MSCS Add-on) on both SHs, is it possible to get duplicate data?
Hello everyone, I am using the DSDL app: https://splunkbase.splunk.com/app/4607 The model I use is scikit-learn's KMeans: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html My goal is to cluster a dataset with k-means and then assign new observations to the center points from my k-means model. I built the model and everything compiled just fine. I then wanted to check my model for possible logical errors, so I use the same dataset in both the | fit and | apply commands:

| inputlookup wineventlog.csv
| fit MLTKContainer algo=Pipeline_V3 k=5 fe_* into app:pipeline_v3

| inputlookup wineventlog.csv
| apply pipeline_v3

To my understanding, this should yield exactly the same results. However, it does not: | fit creates 5 clusters and assigns observations to each cluster, while | apply only assigns observations to 3 of the 5 clusters. Does anyone have a precise idea of what goes on behind the scenes with | fit and | apply? I already checked the documentation that outlines what they both do: https://docs.splunk.com/Documentation/MLApp/5.3.3/User/Understandfitandapply I thoroughly checked whether I have null values, non-numeric fields that might get converted, etc., but I could not figure out why fit and apply wouldn't yield the same result. Below is the code I use for each command.

from sklearn.cluster import KMeans

def fit(model, df, param):
    # Number of clusters
    k = int(param["options"]["params"]["k"])
    # Fit the k-means model on the incoming dataframe
    kmeans = KMeans(n_clusters=k, random_state=0).fit(df)
    model["kmeans"] = kmeans
    return model

def apply(model, df, param):
    # Assign new observations to the fitted k-means centers
    predictions = model["kmeans"].predict(df)
    return predictions
Here's the weirdest error I've ever seen. When I run the following code snippet I get "syntax error: line 1, column 0", but when I change the port to anything else, the code runs. Any help?

import os
import splunklib.client as client
from dotenv import load_dotenv

def connect_to_splunk(username, password, host='splunk.my_company.biz', scheme='https'):
    try:
        # Create a Splunk service object via the REST API
        service = client.connect(username=username, password=password, host=host, scheme=scheme, port='443')
        if service:
            print("Splunk service created successfully")
    except Exception as e:
        print(e)
index=XX sourcetype=YY source=*/log/abc.log
| dedup _time, bppm_message, bppm_nodename sortby -_indextime
| rex field=bppm_operations_annotations "0x[a-z0-9]{8},(?<onepass>\w+),,(OPERATOR|OVERRIDE)_CLOSED"
| rex field=bppm_operations_annotations "(OPERATOR_CLOSED:|OVERRIDE_CLOSED:|OWNERSHIP_TAKEN:)\s+(?<closed_with>(\w|\s|\-|\/|\.|\[|\])+),"
| rex field=bppm_annotations "0x[a-z0-9]{8},[a-zA-Z0-9]+,(?<INC_CREATED_TSIM>.*)"
| rex field=bppm_annotations "[0-9]{3}[A-Z0-9]{5},[a-zA-Z0-9]+,(?<MUTIPLE_CLOSURE_ANNOTATION>.*)"
| rex field=bppm_annotations "0x[a-z0-9]{8},CME-remedy.mrl:execute AddIncidentToNotes,Incident (?<INC_CREATED_TSIM_2>(\w|\s|\-|\/|\.|\[|\])+) created by"
| eval bppm_nar_close_multiple_events=if(NOT match(bppm_operations_annotations,"OVERRIDE_CLOSED") AND (NOT match(bppm_operations_annotations,"OPERATOR_CLOSED")),"yes", "no")
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", INC_CREATED_TSIM, closed_with)
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", INC_CREATED_TSIM_2, closed_with)
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", MUTIPLE_CLOSURE_ANNOTATION, closed_with)
| fillnull value="Null" closed_with
| eval time=strftime(_time,"%x-%H:%M:%S")
| lookup onepasslk onepass
| search bppm_ecdb_env="***"
| fillnull value="Null"
| stats count(bppm_message) as Total_count, count(eval(like(closed_with, "%INC%"))) as Closed-With-INC, count(eval(like(closed_with, "%CRQ%"))) as Closed-With-CRQ, count(eval(like(closed_with, "%PKE%"))) as Closed-With-PKE, count(eval(like(closed_with, "%WO%"))) as Closed-With-WO by username
| sort -Total_count

With the above query I am able to pull data as shown in the attached image, but I want to add 3 more columns (Unique INC Count, Unique CRQ Count, Unique WO Count) that show distinct counts. Please help on how to achieve this.
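In case it helps to know the mechanism: stats supports dc(eval(...)) in the same way as count(eval(...)), so distinct counts can sit alongside the existing counters. A sketch of the extra aggregation clauses, assuming closed_with carries the INC/CRQ/WO reference (the output field names are illustrative); they would be appended inside the existing stats command, after the Closed-With-WO clause and before "by username":

dc(eval(if(like(closed_with, "%INC%"), closed_with, null()))) as "Unique INC Count",
dc(eval(if(like(closed_with, "%CRQ%"), closed_with, null()))) as "Unique CRQ Count",
dc(eval(if(like(closed_with, "%WO%"), closed_with, null()))) as "Unique WO Count"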
I want to list all the KV store collections through SPL, something like: | rest /servicesNS/-/- ... I am unable to figure out the right rest command and the correct fields. Would appreciate any guidance.
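A sketch of one way this might look, assuming the goal is to list the KV store collection definitions; the storage/collections/config endpoint holds them, and the field names below come from the usual | rest output and may need adjusting:

| rest /servicesNS/-/-/storage/collections/config
| rename "eai:acl.app" as app, title as collection
| table app collection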
Hello everyone, I'm new to Splunk. I'm trying to set up and integrate GitHub Cloud logs into Splunk Cloud. I have managed to get the Splunk GitHub Add-On added to the Splunk IDM. Currently I'm waiting for a GitHub account to be created; once I have the account, I will proceed to set up the inputs. Has anyone done this before? Thanks & Regards, Murali
I want to build a search that sums the error statuses per http_user_agent, like in the second dashboard, but I do not know how to group status codes such as 201 and 202 into a 2xx class before summing.
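A minimal sketch, assuming the HTTP status code lives in a field called status and that grouping into classes such as 2xx, 4xx and 5xx is what is wanted (index and sourcetype are placeholders):

index=web sourcetype=access_combined
| eval status_class=substr(tostring(status), 1, 1) . "xx"
| stats count by http_user_agent status_class

The first digit of the code determines the class, so 201 and 202 both land in 2xx and can then be counted or summed per http_user_agent.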
I have a table that contains multiple keys and values. One of the keys, "body", has the value below:

"body": "{\n \"Type\" : \"Notification\",\n \"MessageId\" : \"f33b9756-bc6b-5efc-8111-cca792b8d4f3\",\n \"TopicArn\" : \"arn:aws:sns:eu-central-1:108770896200:PL-PRD-notification-media\",\n \"Message\" : \"{\\\"licenseValidFrom\\\":\\\"2022-11-18T07:56:18.760+01:00\\\",\\\"licenseValidUntil\\\": \\\"3022-03-21T07:56:18.760+01:00\\\",\\\"hasCopyright\\\":\\\"False\\\",\\\"resolutionInPx\\\": \\\"685x1664\\\",\\\"resolutionKey\\\":\\\"ORIGINAL\\\",\\\"checksum\\\":\\\"35a63f43ec3088c9cf01b6c5473f1436\\\", \\\"description\\\": \\\"Jewelry Full\\\", \\\"brand\\\": \\\"\\\", \\\"category\\\": \\\"\\\", \\\"mediaType\\\": \\\"AdditionalImage\\\", \\\"status\\\": \\\"Media.Active.490.Finished\\\", \\\"gtin\\\": \\\"9009656409602\\\", \\\"channel\\\": \\\"gkkDigitalDataManagement\\\", \\\"mediaId\\\": \\\"06\\\", \\\"contentType\\\": \\\"image/jpeg\\\"}\",\n \"Timestamp\" : \"2022-11-18T06:56:19.980Z\",\n \"SignatureVersion\" : \"1\",\n \"Signature\" : \"AySfxHK6Y3ZSA7BsgR7sFHva82snBuenk74ZMJ5HzewU4ozOg8PDOnjeBAY0FLbFxomWOEVIzNWp9yW8Ti9lWWNpdzeMd4MYUhN/a0tLwce1Dk0xdAlsM9DByiJHUTWj1QkvUsaJChMaDfZOyFwZNhvHBbtC9W/Y9AtcZnS9ahz8bQBvxIZv/Xb7tK/g0pvOJ2Nx633TN1UStYshQef8g1cV+q4Ey0fMRr9l/K00POuBUCcGZRRXTiGaqVOTWk08ARFsW5a9Iz28kaBz4PDFNdCALgnwdZ65m6k2HL8fYW5O7gvxEqAOLnYcPsX8XLiV20tSd2NBgoytq5f3IxAbsw==\",\n \"MessageAttributes\" : {\n \"channel\" : {\"Type\":\"String\",\"Value\":\"gkkDigitalDataManagement\"},\n \"mediaStatus\" : {\"Type\":\"String\",\"Value\":\"Media.Active.490.Finished\"},\n \"mediaType\" : {\"Type\":\"String\",\"Value\":\"AdditionalImage\"}\n }\n}",

I want to retrieve [gtin: 9009656409602] in a separate table.
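A minimal sketch of how the GTIN could be pulled out with rex, assuming the escaped JSON shown above is available in a field called body and that "gtin" appears only once per event:

| rex field=body "gtin\D+(?<gtin>\d+)"
| table gtin

The \D+ skips over the escaped quotes and the colon between the key name and the digits, which avoids dealing with the multiple levels of backslash escaping in the nested Message string.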
Hi, I am working on a use case with the following requirements: 1. a high number of connections to external DNS IPs from non-authorized internal DNS servers (i.e. end users or even servers); 2. connections that have higher upload than download bytes. The query I am developing is:

index=*_fw_* (src=internal_ips) NOT (dest=external_ips) AND (dest_port=53) bytes_out>0
| eventstats sum(bytes_out) AS total_bytes_out by src
| eventstats sum(bytes_in) AS total_bytes_in by src
| where total_bytes_out > total_bytes_in
| stats count by src _time dest dest_port total_bytes_out total_bytes_in sourcetype host app dstcountry ftnt_action index osname packets_out packets_in policyname product service srcmac src_translated_ip srcname subtype eventtype transport user vd vendor vendor_action _raw
| sort - total_bytes_out
| uniq

But I'm getting the same source and destination IPs repeated as duplicate entries. I want to group by source IP so that only unique source IPs are displayed, along with all the other fields.
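One way to end up with a single row per source IP is to aggregate everything in a single stats split only by src, rather than listing every field in the by clause. A sketch, assuming internal_ips/external_ips resolve the same way as in the query above and that multivalue lists of destinations per source are acceptable:

index=*_fw_* (src=internal_ips) NOT (dest=external_ips) AND (dest_port=53) bytes_out>0
| stats sum(bytes_out) as total_bytes_out, sum(bytes_in) as total_bytes_in, dc(dest) as distinct_dests, values(dest) as dest, count as connections by src
| where total_bytes_out > total_bytes_in
| sort - total_bytes_out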
Hi, I want to calculate a count based on a condition. In the query below, if the event is 'Sync' then the 'failed' count should be fetched only from source="*gps-request-processor-test*", and if the event is 'Async' then the 'failed' count should be fetched only from source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*".

index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request" documentType="*" OR labelType="*"
| eval LabelType=coalesce(labelType, documentType)
| eval event=case(like(event,"%Sync%"),"Sync", like(event,"%Async%"),"Async")
| rex mode=sed "s/1067/Windrunner/g" field=sourceNodeCode
| rex mode=sed "s/531/SFS/g" field=sourceNodeCode
| rex mode=sed "s/EUROPE_MIDDLE_EAST_AFRICA/EMEA/g" field=geoCode
| eval Geo=geoCode, Node=sourceNodeCode
| eval syncelapsed=if(source like "%gps-request-processor%", elapsedTime, null())
| eval asyncelapsed=if(source like "%gps-external-processor%" OR source like "%gps-artifact-processor%", elapsedTime, null())
| stats count(eval(status="Received" AND source like "%gps-request-processor%")) as received count(eval(deliveryStatus="Success")) as delivered count(eval(status="Failed")) as failed avg(syncelapsed) as syncelapsedtime avg(asyncelapsed) as asyncelapsedtime avg(deliveryElapsedTime) as deliverytime by Node Geo LabelType event
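A sketch of how the condition could be folded into the failed counter itself, using count(eval(...)) with the event/source pairing spelled out; this assumes event has already been normalized to Sync/Async by the earlier eval, and it would replace the existing count(eval(status="Failed")) as failed clause inside the stats:

count(eval(status="Failed" AND ((event="Sync" AND like(source, "%gps-request-processor-test%")) OR (event="Async" AND (like(source, "%gps-external-processor-test%") OR like(source, "%gps-artifact-processor-test%")))))) as failed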
Hi Dears, when I search Firewall indexes for IPs only, without field names, the search is fast, like: index="EX" "X.X.X.X" OR "X.X.X.X" OR "X.X.X.X" OR "X.X.X.X" OR "X.X.X.X" But when I include the field name, as below, the search takes a lot of time, especially in the Firewall index (though I believe it should take less time than the search above, because it searches only a specific field): index="EX" dest_ip="X.X.X.X" OR dest_ip="X.X.X.X" OR dest_ip="X.X.X.X" OR dest_ip="X.X.X.X" OR dest_ip="X.X.X.X" Please advise. Best Regards,
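One pattern that sometimes helps here, offered only as a sketch and assuming dest_ip is a search-time extracted field: keep the fast raw-token filter to narrow down the candidate events, and apply the field filter on top of it (the IPs are placeholders):

index="EX" ("X.X.X.X" OR "Y.Y.Y.Y") dest_ip IN ("X.X.X.X", "Y.Y.Y.Y")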
Hello Splunkers, workflows are monitored through Splunk. A workflow has different stages such as running, paused, cancelled and completed, and I need to get the latest status of each workflow. I am using the sort -_time option to get the latest status, but when I add it to the search the data count varies. For the last 7 days:

index=... | table _time EXECUTION_NAME STATUS EXECUTION_ID Stage Environment source | dedup EXECUTION_ID | chart count(EXECUTION_ID) as Workflows_Triggered by Environment,STATUS

Environment   COMPLETED   PAUSED   RUNNING
XXX           94498       1        56

With sort -_time added, also for the last 7 days:

index=... | table _time EXECUTION_NAME STATUS EXECUTION_ID Stage Environment source | sort -_time | dedup EXECUTION_ID | chart count(EXECUTION_ID) as Workflows_Triggered by Environment,STATUS

Environment   COMPLETED   RUNNING
XXX           9735        5

The reason for using sort -_time is to get the latest status of each EXECUTION_ID; COMPLETED should be what remains once the dedup is done, e.g.:

_time                     STATUS      EXECUTION_ID
2022-11-30 12:20:00.492   RUNNING     12345678901
2022-11-30 12:20:18.000   COMPLETED   12345678911

Requesting support. Thank you!!!
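For what it's worth, a sketch that sidesteps sort and dedup entirely by letting stats pick the latest status per execution; note also that a bare | sort -_time truncates results to the sort command's default limit unless it is written as | sort 0 -_time, which can explain counts changing when sort is added:

index=...
| stats latest(STATUS) as STATUS latest(_time) as _time by EXECUTION_ID Environment
| chart count as Workflows_Triggered by Environment,STATUS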
Hi All, below is what the sample data looks like.

sourcetype_1
s1_field1: 123
s1_field2: {
  { ID: 2 Name: ABC },
  { ID: 1 Name: XYZ }
}
s1_field3: Completed

sourcetype_2
s2_field1: 123
s2_field2: {
  { CID: 3 Info: XXX },
  { CID: 2 Info: YYY }
}
s2_field3: N

First I need to match s1_field1 of sourcetype_1 with s2_field1 of sourcetype_2. If they match, I then need to match the CIDs in s2_field2 of sourcetype_2 with the IDs in s1_field2 of sourcetype_1. Where those match as well, I need to fetch all the other fields from both sourcetypes. Expected output:

ID: 2  Name: ABC  Info: YYY

Please suggest.
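A rough sketch of the shape such a correlation could take, under the (big) assumption that the nested values have already been extracted so that each event carries a single ID/Name or CID/Info pair (for example after spath and mvexpand); the field names are taken from the sample above:

(sourcetype=sourcetype_1) OR (sourcetype=sourcetype_2)
| eval join_key=coalesce(s1_field1, s2_field1)
| eval join_id=coalesce(ID, CID)
| stats values(Name) as Name values(Info) as Info by join_key join_id
| where isnotnull(Name) AND isnotnull(Info)

Rows that survive the final where are the IDs present in both sourcetypes for the same key, with the matching Name and Info side by side.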
Hi, I want to recover my Splunk Web password through configuration, but I have lost my passwd file in the etc folder. Is there any way to resolve this? Can you please help?
Hi all. I have a running query that I can see on the Jobs page in Splunk, but I cannot find the alert/dashboard it comes from. There is no name shown, as there is when an alert is running, just the search query itself. Is there a query I can use to reverse-search this search string and find the alert/dashboard it belongs to?
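A sketch of one way the reverse lookup might be done, assuming the query belongs either to a saved search/alert or to a dashboard; replace the fragment with a distinctive piece of the SPL shown on the Jobs page:

| rest /servicesNS/-/-/saved/searches
| search search="*distinctive fragment*"
| table title eai:acl.app search

and, for dashboards (the XML source is in eai:data):

| rest /servicesNS/-/-/data/ui/views
| search eai:data="*distinctive fragment*"
| table label eai:acl.app title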
This query returns the URLs with errors per 5-minute span; I just want to filter down to those errors that occur in consecutive intervals, like 9:00 and 9:05.

index=index uriPath=url*
| bin span=5m _time
| stats count as Volume, count(eval(httpCode<=299)) as "Success", count(eval(httpCode>399)) as Fail by uriPath _time
| eval F=round(Fail*100/Volume, 2)
| where F > 2 AND Volume > 50
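A sketch of how consecutive intervals could be flagged once the per-interval rows exist, assuming 5-minute bins (300 seconds) and that "consecutive" means the neighboring bin for the same uriPath also breached the threshold; prev_time, next_time and the gap_* fields are introduced only for illustration:

| sort 0 uriPath _time
| streamstats current=f last(_time) as prev_time by uriPath
| eval gap_before=_time - prev_time
| sort 0 uriPath -_time
| streamstats current=f last(_time) as next_time by uriPath
| eval gap_after=next_time - _time
| where gap_before==300 OR gap_after==300
| sort 0 uriPath _time

Appending this after the existing | where keeps only the intervals that have at least one adjacent interval which also failed the threshold.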
Hi All, I need help with sending data through a UF.

Background: We have a single PROD Splunk instance acting as an all-in-one server, and all the configs (e.g. props, transforms) are on this server. Currently we ingest data using Add Data from the Splunk UI. We upload data for a couple of sources and use props.conf for parsing. props.conf is defined per sourcetype (e.g. sourcetypeA) and lives in an app called appA. When we upload data using the upload option, the data parses correctly; we have been ingesting this way for more than a year and everything works fine.

Current issue: We recently installed a UF on one of the systems and configured it to send data to the Splunk instance (the single component): UF ----> SH. As part of testing we sent file A from the UF with sourcetype sourcetypeA to the Splunk instance, and the props.conf settings were not applied on the search head. We later ingested the same file A via the upload data option in the UI with sourcetype sourcetypeA, and parsing worked fine (the expected behaviour), but it does not work when the data is sent from the UF. We checked the internal logs of both the UF and the SH and found no errors for this sourcetype. What is causing the props not to be applied? Can anyone suggest?

inputs.conf on the UF:

[monitor://fileA]
index = index1
_TCP_ROUTING = uf_default
crcSalt = <SOURCE>
sourcetype = sourcetypeA

props.conf on the SH:

[sourcetypeA]
CHARSET = MS-ANSI
FIELD_DELIMITER = ;
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = Time
TIME_FORMAT = %d.%m.%Y %H:%M
TZ = IST
category = Structured
disabled = false
pulldown_type = true
TRUNCATE = 50000
FIELD_QUOTE = "
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
EVAL-name = <condition>
LOOKUP-name = <condition>
FIELDALIAS-name = <condition>
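One detail that may be relevant, offered as a sketch rather than a confirmed diagnosis: for files read by a monitor input on a universal forwarder, INDEXED_EXTRACTIONS parsing of structured data happens on the forwarder itself, so the structured-data portion of props.conf may also need to be deployed to the UF, for example:

[sourcetypeA]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
FIELD_QUOTE = "
TIMESTAMP_FIELDS = Time
TIME_FORMAT = %d.%m.%Y %H:%M
CHARSET = MS-ANSI

The search-time settings (EVAL-, LOOKUP-, FIELDALIAS-) would stay on the search head.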
Hi Splunkers, I am looking for some help with SPL for the following use case:

| makeresults count=4
| streamstats count
| eval src=case(count=1, "2.3.5.6", count=2, "3.3.3.3", count=3, "1.1.1.6", count=4, "4.5.6.4")
| eval dest=case(count=1, "4.5.6.4", count=2, "4.5.6.4", count=3, "2.2.2.6", count=4, "2.3.5.6")

I want to get only event 1 and event 4: in this case event 1's src equals event 4's dest, and event 1's dest equals event 4's src. This is only a run-anywhere example; in reality there will be thousands of events, and I want to find every event x whose src/dest match the dest/src of some other event y. Thanks, Bhupi
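A run-anywhere sketch built on the example above: derive a direction-independent pair key and keep only pairs that occur in both directions (pair and directions are illustrative field names, and on real data the first four lines would be replaced by the base search):

| makeresults count=4
| streamstats count
| eval src=case(count=1, "2.3.5.6", count=2, "3.3.3.3", count=3, "1.1.1.6", count=4, "4.5.6.4")
| eval dest=case(count=1, "4.5.6.4", count=2, "4.5.6.4", count=3, "2.2.2.6", count=4, "2.3.5.6")
| eval pair=if(src < dest, src . "|" . dest, dest . "|" . src)
| eventstats dc(src) as directions by pair
| where directions > 1
| fields - pair directions

With the sample data this keeps events 1 and 4 only, because only the 2.3.5.6/4.5.6.4 pair is seen with two different src values.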